Linear Algebra II
Exercise Sheet no. 10
Summer term 2011
Prof. Dr. Otto June 15, 2011
Dr. Le Roux Dr. Linshaw
Exercise 1 (Warm-up: self-adjoint maps)
Let V be a finite-dimensional unitary space and ϕ ∈ Hom(V,V). Show that the following are equivalent:
(a) ϕ is self-adjoint.
(b) 〈v,ϕ(v)〉 ∈ R for all v ∈ V.
Hint: Consider 〈v+w,ϕ(v+w)〉 and 〈v+iw,ϕ(v+iw)〉 for the implication (b) ⇒ (a).
Solution:
(a) ⇒ (b). Assume that ϕ is self-adjoint. Then for all v ∈ V,
〈v,ϕ(v)〉 − \overline{〈v,ϕ(v)〉} = 〈v,ϕ(v)〉 − 〈ϕ(v),v〉 = 〈v,ϕ(v)〉 − 〈v,ϕ⁺(v)〉 = 0,
hence 〈v,ϕ(v)〉 ∈ R.
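The computation above can be sanity-checked numerically. The following is a minimal Python sketch (not part of the original sheet); the helper names inner and apply are ours, and the scalar product is taken linear in the first argument, matching the expansions used in this solution.

```python
# Sanity check (illustration only): for a self-adjoint (Hermitian) matrix A,
# the value <v, A v> is real for every vector v.

def inner(v, w):
    """Standard scalar product on C^2, linear in the first argument."""
    return sum(vi * wi.conjugate() for vi, wi in zip(v, w))

def apply(A, v):
    """Multiply a 2x2 matrix A by a vector v."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# A Hermitian example matrix: A equals its conjugate transpose.
A = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]

for v in ([1 + 2j, -1j], [0.5 - 1j, 2 + 3j]):
    val = inner(v, apply(A, v))
    assert abs(val.imag) < 1e-12  # <v, A v> is real (up to rounding)
```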
(b) ⇒ (a). If 〈v,ϕ(v)〉 ∈ R for all v ∈ V, then for any pair of vectors v,w ∈ V:
〈v+w,ϕ(v+w)〉 − 〈v,ϕ(v)〉 − 〈w,ϕ(w)〉 = 〈w,ϕ(v)〉 + 〈v,ϕ(w)〉 ∈ R,
〈v+iw,ϕ(v+iw)〉 − 〈v,ϕ(v)〉 − 〈w,ϕ(w)〉 = i(〈w,ϕ(v)〉 − 〈v,ϕ(w)〉) ∈ R,
whence Im(〈w,ϕ(v)〉) = −Im(〈v,ϕ(w)〉) and Re(〈w,ϕ(v)〉) = Re(〈v,ϕ(w)〉). It follows that
〈v,ϕ(w)〉 = \overline{〈w,ϕ(v)〉} = 〈ϕ(v),w〉 = 〈v,ϕ⁺(w)〉
for all v,w ∈ V, hence ϕ = ϕ⁺.

Exercise 2 (Eigenvalues)
Let V be a finite-dimensional vector space and ϕ,ψ be endomorphisms of V. Prove that λ is an eigenvalue of ϕ◦ψ if and only if it is an eigenvalue of ψ◦ϕ.
Hint: It may help to distinguish cases according to whether λ ≠ 0 or λ = 0.
Extra: Can you give a counterexample in case V is infinite-dimensional?
Solution:
Let λ be an eigenvalue of ϕ◦ψ. We have two cases:
a) λ ≠ 0:
There is an eigenvector v ≠ 0 with (ϕ◦ψ)(v) = λv. This yields
(ψ◦ϕ)(ψ(v)) = ((ψ◦ϕ)◦ψ)(v) = (ψ◦(ϕ◦ψ))(v) = ψ((ϕ◦ψ)(v)) = ψ(λv) = λψ(v).
Because λ ≠ 0 and v ≠ 0, we know that ψ(v) ≠ 0 as well (otherwise λv = (ϕ◦ψ)(v) = ϕ(ψ(v)) = ϕ(0) = 0, a contradiction). Thus ψ(v) is an eigenvector of ψ◦ϕ with eigenvalue λ.
b) λ = 0:
As V is finite-dimensional, the maps ϕ and ψ can be described with respect to a basis of V by matrices A resp. B.
Since λ = 0 is a root of det(AB − λE) = 0, we have det(AB) = 0. This implies that det(BA) = det(B)det(A) = det(A)det(B) = det(AB) = 0, and therefore λ = 0 is a solution of det(BA − λE) = 0, i.e. λ is an eigenvalue of ψ◦ϕ.
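For concreteness, the determinant argument can be illustrated numerically. A minimal Python sketch with our own example matrices (not from the sheet): for 2×2 matrices, AB and BA have the same trace and determinant, hence the same characteristic polynomial λ² − tr·λ + det, and therefore the same eigenvalues.

```python
# Illustration: AB and BA share trace and determinant, hence eigenvalues.

def mul(A, B):
    """Product of two 2x2 matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def char_poly(M):
    """Coefficients (trace, det) of lambda^2 - tr*lambda + det for a 2x2 matrix."""
    return (M[0][0] + M[1][1], M[0][0]*M[1][1] - M[0][1]*M[1][0])

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]

# Same characteristic polynomial, hence the same eigenvalues.
assert char_poly(mul(A, B)) == char_poly(mul(B, A))
```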
Extra: Let V be an infinite-dimensional euclidean space and U a proper subspace of V such that there exists an isomorphism ϕ : V → U; take ψ := π_U, the orthogonal projection onto U. Then ψ◦ϕ = ϕ is injective, so 0 is not an eigenvalue of ψ◦ϕ, while ϕ◦ψ vanishes on U⊥, so 0 is an eigenvalue of ϕ◦ψ (provided U⊥ ≠ {0}, as is the case, e.g., for the right shift ϕ on the space of finitely supported sequences with U = {x | x₁ = 0}).
Exercise 3 (Self-adjoint and unitary maps)
Let V be a finite-dimensional unitary space and ϕ ∈ Hom(V,V) be a normal endomorphism. Show the following.
(a) ϕ is self-adjoint if and only if all the eigenvalues of ϕ are real.
(b) ϕ is unitary if and only if all the eigenvalues of ϕ have absolute value 1.
Solution:
a) ⇒: By Proposition 2.4.5.
⇐: Assume conversely that all the eigenvalues are real. Since ϕ is normal, there exists an orthonormal basis of eigenvectors of ϕ, which are at the same time eigenvectors of ϕ⁺. With respect to this basis, both ϕ and ϕ⁺ are represented by diagonal matrices. These matrices are identical, since the eigenvalues of ϕ⁺ are the conjugates of the eigenvalues of ϕ, and these are real. Hence ϕ = ϕ⁺.
b) ⇒: If ϕ is unitary and λ is an eigenvalue of ϕ with corresponding eigenvector v, then
‖v‖ = ‖ϕ(v)‖ = ‖λv‖ = |λ|·‖v‖,
hence |λ| = 1.
⇐: Assume conversely that |λ| = 1 for every eigenvalue λ of ϕ. Since ϕ and, hence, ϕ⁺ are normal, by Exercise (T 10.2) they have a common orthonormal basis of eigenvectors, with respect to which they are represented by diagonal matrices. Denote by D the diagonal matrix representing ϕ and by D⁺ = \overline{D} the one representing ϕ⁺.
Let i ∈ {1, …, dim(V)} be arbitrary. The i-th row of D has the form (0, …, 0, λ, 0, …, 0) with an eigenvalue λ ∈ C. Correspondingly, the i-th column of D⁺ is (0, …, 0, \overline{λ}, 0, …, 0)ᵗ. It follows that the product DD⁺ is a diagonal matrix with λ·\overline{λ} = |λ|² = 1 as the i-th diagonal entry. Since i was arbitrary, we get DD⁺ = E, hence ϕ is unitary.
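The diagonal computation in the last step can be checked numerically; a short Python sketch with illustrative unit-modulus eigenvalues (our choice, not from the sheet):

```python
import cmath

# Eigenvalues of modulus 1, written as e^{i*theta} for some sample angles.
eigs = [cmath.exp(1j * t) for t in (0.3, 2.0, -1.1)]

for lam in eigs:
    # The i-th diagonal entry of D * D^+ is lambda * conj(lambda) = |lambda|^2,
    # which equals 1, so D * D^+ = E.
    assert abs(lam * lam.conjugate() - 1) < 1e-12
```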
Exercise 4 (Simultaneous diagonalization)
Let V be a finite-dimensional unitary space and ϕ1, …, ϕm normal endomorphisms of V that pairwise commute, that is, ϕi◦ϕj = ϕj◦ϕi for all i,j ∈ {1, …, m}.
Prove that there exists an orthonormal basis B = (b1, …, bn) of V consisting of simultaneous eigenvectors, that is, there are complex numbers λij for i = 1, …, m and j = 1, …, n such that
ϕi(bj) = λij·bj for all i, j.
(a) Let λ be an eigenvalue of ϕ1 and Vλ(ϕ1) = {v ∈ V | ϕ1(v) = λv} the corresponding eigenspace. Prove that
ϕi(Vλ(ϕ1)) ⊆ Vλ(ϕ1)
for all i.
(b) Let λ and µ be two different eigenvalues of ϕi. Show that the corresponding eigenspaces are orthogonal.
(c) Prove now the existence of a basis of V with the desired properties.
Hint: Induction on m.
Solution:
a) Let v ∈ Vλ(ϕ1). Since ϕi◦ϕ1 = ϕ1◦ϕi, we get that
ϕ1(ϕi(v)) = (ϕ1◦ϕi)(v) = (ϕi◦ϕ1)(v) = ϕi(ϕ1(v)) = ϕi(λv) = λϕi(v),
therefore ϕi(v) ∈ Vλ(ϕ1).
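Part (a) can be illustrated numerically: if A and B commute and v is an eigenvector of A, then Bv again lies in the same eigenspace of A. A minimal Python sketch with our own example matrices (not from the sheet):

```python
# Illustration of the invariance argument: commuting maps preserve eigenspaces.

def mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    """Multiply a 2x2 matrix A by a vector v."""
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

A = [[2, 1], [0, 3]]
B = [[5, 2], [0, 7]]
assert mul(A, B) == mul(B, A)          # A and B commute

lam, v = 2, [1, 0]                     # v is an eigenvector of A for lam = 2
assert apply(A, v) == [lam * v[0], lam * v[1]]

w = apply(B, v)                        # w = B v
assert apply(A, w) == [lam * w[0], lam * w[1]]  # w still lies in V_2(A)
```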
b) Let λ and µ be different eigenvalues of ϕi, and let v ∈ Vλ(ϕi) and w ∈ Vµ(ϕi) be arbitrary. Since ϕi is normal, we can apply Exercise (T 9.1) to get that
\overline{µ}〈v,w〉 = 〈v,µw〉 = 〈v,ϕi(w)〉 = 〈ϕ⁺i(v),w〉 = 〈\overline{λ}v,w〉 = \overline{λ}〈v,w〉.
As λ ≠ µ, this is possible only when 〈v,w〉 = 0.
c) We prove the statement by induction on m. The case m = 1 is clear by Theorem 2.4.10.
The induction step (from m−1 to m) results from (a) and (b): by (a), each eigenspace Vλ(ϕ1) is an invariant subspace of ϕi for i = 2, …, m. It follows that these endomorphisms can be restricted to Vλ(ϕ1), and the restrictions are normal (with respect to the restriction of the scalar product to Vλ(ϕ1)). By the induction hypothesis, there exists an orthonormal basis of Vλ(ϕ1) consisting of simultaneous eigenvectors of ϕi, i = 2, …, m, which are obviously eigenvectors of ϕ1 as well (with eigenvalue λ).
Since, by (b), the eigenspaces of ϕ1 for the other eigenvalues are orthogonal to Vλ(ϕ1), we obtain altogether an orthonormal basis of V with the desired properties.
Exercise 5 (Isometries and ‘skew-rotations’)
We consider the real plane R² with the standard scalar product 〈·,·〉. Let ϕ : R² → R² be a linear map that is represented by a rotation matrix
A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
with respect to some basis B = {b1, b2}. We assume that θ ≠ 0, π.
Show that ϕ is an isometry if and only if B is almost an orthonormal basis in the sense that
〈b1,b2〉 = 0 and 〈b1,b1〉 = 〈b2,b2〉.
(So we require the lengths of b1 and b2 only to be equal, not to be 1.)
Solution:
Let G := [〈·,·〉]_B be the matrix of 〈·,·〉 with respect to the basis B. We have to prove that ϕ is an isometry if and only if
G = \begin{pmatrix} c & 0 \\ 0 & c \end{pmatrix} = c·E₂
for some c ∈ R.
We start by showing that ϕ is an isometry if and only if the matrices G and A commute, i.e., GA = AG. (Here [x]_B denotes the coordinate column vector of x with respect to B.)
Note that, A being orthogonal, we have Aᵗ = A⁻¹. Hence GA = AG implies that
〈ϕ(x),ϕ(y)〉 = (A[x]_B)ᵗ G (A[y]_B) = ([x]_B)ᵗ Aᵗ G A [y]_B = ([x]_B)ᵗ Aᵗ A G [y]_B = ([x]_B)ᵗ G [y]_B = 〈x,y〉,
and ϕ is an isometry. Conversely, if ϕ is an isometry, then we have
([x]_B)ᵗ G [y]_B = 〈x,y〉 = 〈ϕ(x),ϕ(y)〉 = ([x]_B)ᵗ Aᵗ G A [y]_B.
Since this holds for all vectors x, y, we get G = Aᵗ G A = A⁻¹ G A, which implies AG = GA.
It remains to prove that AG = GA if and only if G = c·E₂. Clearly, if G = c·E₂ then AG = GA. Conversely, assume that AG = GA. Since the scalar product is symmetric, so is its matrix. Hence
G = \begin{pmatrix} a & b \\ b & c \end{pmatrix}
for suitable a, b, c ∈ R. We obtain
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} a & b \\ b & c \end{pmatrix} = \begin{pmatrix} a & b \\ b & c \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.
This gives the following equations:
a cos θ − b sin θ = a cos θ + b sin θ,
b cos θ − c sin θ = −a sin θ + b cos θ,
a sin θ + b cos θ = b cos θ + c sin θ,
b sin θ + c cos θ = −b sin θ + c cos θ.
The last equation simplifies to 2b sin θ = 0. Since θ ≠ 0, π, this implies that b = 0. Hence the second equation simplifies to c sin θ = a sin θ. Since sin θ ≠ 0, it follows that a = c. As desired, we obtain
G = \begin{pmatrix} c & 0 \\ 0 & c \end{pmatrix} = c·E₂.
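As a numerical cross-check of the whole argument (illustrative numbers, not part of the sheet), the following Python sketch verifies that with Gram matrix G = c·E₂ the map given by A preserves the scalar product in B-coordinates, while a Gram matrix with unequal diagonal entries does not:

```python
import math

def apply(M, v):
    """Multiply a 2x2 matrix M by a vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def gram_inner(G, x, y):
    """Scalar product x^t G y in B-coordinates."""
    Gy = apply(G, y)
    return x[0]*Gy[0] + x[1]*Gy[1]

theta = 0.7  # any theta different from 0 and pi
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

x, y = [1.0, 2.0], [-3.0, 0.5]

# G = c*E_2: the rotation matrix A is an isometry for this scalar product.
G_good = [[2.0, 0.0], [0.0, 2.0]]
assert abs(gram_inner(G_good, apply(A, x), apply(A, y))
           - gram_inner(G_good, x, y)) < 1e-12

# Unequal diagonal entries (basis vectors of different length): not an isometry.
G_bad = [[1.0, 0.0], [0.0, 3.0]]
assert abs(gram_inner(G_bad, apply(A, x), apply(A, y))
           - gram_inner(G_bad, x, y)) > 1e-6
```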