
Linear Algebra II

Exercise Sheet no. 6

Summer term 2011

Prof. Dr. Otto, Dr. Le Roux, Dr. Linshaw

May 16, 2011

Exercise 1 (Warm-up: possible Jordan normal forms)

Let ϕ : V → V be an endomorphism of a finite-dimensional C-vector space V. Which of the following situations can occur? (Here ⟨v⟩ denotes the ϕ-cyclic subspace generated by v.)

(a) i. V is 6-dimensional, the minimal polynomial of ϕ is (X − 2)^5, and the eigenspace of 2 has dimension 3.

ii. V is 6-dimensional, the minimal polynomial of ϕ is (X − 2)(X − 3)^2, and the eigenspace of 2 has dimension 3.

(b) i. ϕ has minimal polynomial (X − 2)^4 and there is a vector v ∈ V with height 3.

ii. ϕ has minimal polynomial (X − 2)^4 and there is a vector v ∈ V with height 6.

iii. ϕ has minimal polynomial (X − 2)^4, but no vector in V has height 3.

(c) i. ϕ has characteristic polynomial (X − 2)^6 and ϕ^2 − ϕ − id = 0.

ii. ϕ^2 − ϕ − 2id = 0 and ϕ has eigenvalues that are not real.

(d) i. V has a ϕ-invariant subspace of dimension 5, 2 is the only eigenvalue of ϕ, but there is no v ∈ V with dim(⟨v⟩) = 5.

ii. 2 is the only eigenvalue of ϕ, V = ⟨v⟩ ⊕ ⟨b⟩ with dim(⟨v⟩) = 5, but the Jordan normal form for ϕ contains no block of size 5.

(e) i. V can be written as the direct sum of two ϕ-invariant subspaces of dimension 4, but there is no Jordan block of size greater than 3 in the Jordan normal form for ϕ.

ii. V can be written as the direct sum of two ϕ-invariant subspaces of dimension 4, and in the Jordan normal form of ϕ there is a Jordan block of size 5.

Solution:

a) i. is impossible. As the minimal polynomial is (X − 2)^5, there must be a Jordan block of size 5. Since the eigenspace of 2 has dimension 3, there must be 3 Jordan blocks, which would occupy at least 5 + 1 + 1 = 7 rows; that does not fit in a 6×6 matrix.

ii. is possible. The Jordan normal form could be

⎛ 2 0 0 0 0 0 ⎞
⎜ 0 2 0 0 0 0 ⎟
⎜ 0 0 2 0 0 0 ⎟
⎜ 0 0 0 3 1 0 ⎟
⎜ 0 0 0 0 3 0 ⎟
⎝ 0 0 0 0 0 3 ⎠ .

b) i. is possible. If the Jordan normal form is

⎛ 2 1 0 0 ⎞
⎜ 0 2 1 0 ⎟
⎜ 0 0 2 1 ⎟
⎝ 0 0 0 2 ⎠

with respect to a basis (b1, …, b4), then we have dim(⟨b3⟩) = 3, i.e., b3 has height 3.
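The example in b) i. can be sanity-checked numerically; the following sketch (assuming numpy is available, and using the 4×4 Jordan block above) verifies that b3 has height 3.

```python
import numpy as np

# The 4x4 Jordan block J_4(2) from b) i. and its nilpotent part
J = 2 * np.eye(4) + np.eye(4, k=1)
N = J - 2 * np.eye(4)

v = np.zeros(4)
v[2] = 1.0  # the basis vector b3 (0-indexed position 2)

# height 3: N^2 v is nonzero, but N^3 v = 0
assert np.any(N @ N @ v != 0)
assert np.all(N @ N @ N @ v == 0)
```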


ii. is impossible. If ψ = ϕ − 2id, then ψ^4 = 0. Hence, no ⟨v⟩ can have dimension greater than 4, i.e., no vector has height greater than 4.

iii. is impossible. There must be a Jordan block of size 4, generated by a vector v of height 4. But then (ϕ − 2id)v must have height 3.

c) i. is impossible. The minimal polynomial must be of the form (X − 2)^k with 1 ≤ k ≤ 6, and it must also divide X^2 − X − 1. But 2 is not a root of this polynomial.

ii. is impossible. If ϕ(v) = λv with v ≠ 0, then (ϕ^2 − ϕ − 2id)(v) = (λ^2 − λ − 2)v = 0. This implies λ^2 − λ − 2 = 0 and, therefore, λ = −1 or λ = 2. So all possible eigenvalues are real.

d) i. is possible; for instance, take ϕ = 2id and V = C^5.

ii. is impossible. The Jordan normal form of ϕ must consist of two blocks, one of size dim(⟨b⟩) and one of size dim(⟨v⟩) = 5.

e) i. is possible. The Jordan normal form could consist of blocks of sizes 3, 1, 3, 1:

⎛ λ 1 0 0 0 0 0 0 ⎞
⎜ 0 λ 1 0 0 0 0 0 ⎟
⎜ 0 0 λ 0 0 0 0 0 ⎟
⎜ 0 0 0 λ 0 0 0 0 ⎟
⎜ 0 0 0 0 λ 1 0 0 ⎟
⎜ 0 0 0 0 0 λ 1 0 ⎟
⎜ 0 0 0 0 0 0 λ 0 ⎟
⎝ 0 0 0 0 0 0 0 λ ⎠

with respect to a basis (b1, …, b8). Then span(b1, …, b4) and span(b5, …, b8) are ϕ-invariant subspaces.

ii. is impossible. If V = V0 ⊕ V1 and both V0 and V1 are ϕ-invariant, the Jordan normal form for ϕ can be obtained by joining the normal forms for the restrictions of ϕ to V0 and V1. So if V0 and V1 can be chosen to have dimension 4, every block has size at most 4; in particular, there can be no Jordan block of size 5 in the Jordan normal form of ϕ.

Exercise 2 (Commuting matrices and simultaneous diagonalization)

(a) Let M1 and M2 be square matrices over F, and let q_{M1} and q_{M2} be the corresponding minimal polynomials. Show that the minimal polynomial of the block matrix

M = ⎛ M1  0  ⎞
    ⎝ 0   M2 ⎠

is the least common multiple of q_{M1} and q_{M2}. (Clearly this observation generalises to block matrices with an arbitrary number of blocks.)

(b) Show that M is diagonalizable if and only if both M1 and M2 are diagonalizable.

(c) Let A and B be diagonalizable n×n matrices over F that commute with each other, i.e., AB = BA.

i. Show that any eigenspace of A is invariant under B.

ii. Show that A and B are simultaneously diagonalizable, i.e., there exists a matrix C such that C⁻¹AC and C⁻¹BC are both diagonal matrices.

Solution:

a) Recall that lcm(q_{M1}, q_{M2}) is the polynomial q characterised by the following properties:

1. q_{M1} | q and q_{M2} | q.

2. If q_{M1} | p and q_{M2} | p, then q | p.

Let q_M be the minimal polynomial of M. Since

q_M(M) = ⎛ q_M(M1)  0       ⎞
         ⎝ 0        q_M(M2) ⎠  = 0,

it follows that q_M(M1) = 0 and q_M(M2) = 0. By the definition of the minimal polynomial, we have q_{M1} | q_M and q_{M2} | q_M. Now suppose that q_{M1} | p and q_{M2} | p for some polynomial p. Then p(M1) = 0 and p(M2) = 0, so

p(M) = ⎛ p(M1)  0     ⎞
       ⎝ 0      p(M2) ⎠  = 0.

Since p(M) = 0, it follows that q_M | p, as q_M is the minimal polynomial of M. Therefore q_M = lcm(q_{M1}, q_{M2}).
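The lcm statement can be illustrated on a small example. In the sketch below (numpy only; the matrices M1 and M2 are of my choosing, not from the sheet), M1 has minimal polynomial (X − 2)^2 and M2 has (X − 2)(X − 3); their lcm (X − 2)^2(X − 3) annihilates M, while its proper divisors do not.

```python
import numpy as np

M1 = np.array([[2.0, 1.0],
               [0.0, 2.0]])        # minimal polynomial (X - 2)^2
M2 = np.diag([2.0, 3.0])           # minimal polynomial (X - 2)(X - 3)
M = np.block([[M1, np.zeros((2, 2))],
              [np.zeros((2, 2)), M2]])
E = np.eye(4)

# lcm((X-2)^2, (X-2)(X-3)) = (X-2)^2 (X-3) annihilates M ...
assert np.allclose((M - 2*E) @ (M - 2*E) @ (M - 3*E), 0)
# ... but no proper divisor of it does:
assert not np.allclose((M - 2*E) @ (M - 3*E), 0)
assert not np.allclose((M - 2*E) @ (M - 2*E), 0)
```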


b) Suppose that M is diagonalizable. Then q_M splits into linear factors with multiplicity one. The same is clearly true of q_{M1} and q_{M2} since q_{M1} | q_M and q_{M2} | q_M. Conversely, if both M1 and M2 are diagonalizable, q_{M1} and q_{M2} split into linear factors with multiplicity one. The same clearly holds for lcm(q_{M1}, q_{M2}) = q_M.

c) i. Let λ be an eigenvalue of A, and let V_λ be the corresponding eigenspace. Given v ∈ V_λ, note that A(Bv) = ABv = BAv = B(Av) = B(λv) = λBv. It follows that Bv ∈ V_λ, as desired.

ii. Let λ be an eigenvalue of A, and let V_λ be the corresponding eigenspace. Choose a basis v1, …, vm for V_λ. Let U be the direct sum of the remaining eigenspaces of A, and let v_{m+1}, …, vn be a basis for U consisting of eigenvectors of A.

Clearly C = (v1, …, vn) is a basis of F^n consisting of eigenvectors of A. Let S be the matrix whose columns are the vectors v1, …, vn, so that A′ = S⁻¹AS is the diagonal matrix diag(λ, …, λ, µ_{m+1}, …, µ_n). (Here the first m diagonal entries are λ.)

Since the eigenspaces of A are invariant under B, it follows that B′ = S⁻¹BS is block diagonal of the form

B′ = ⎛ B1  0  ⎞
     ⎝ 0   B2 ⎠ ,

where B1 is an m×m block and B2 is an (n−m)×(n−m) block. Clearly B1 commutes with the m×m matrix diag(λ, …, λ) = λE_m. Since A and B commute, it follows that A′ and B′ commute, which implies that B2 commutes with the (n−m)×(n−m) matrix diag(µ_{m+1}, …, µ_n). By part (b), B1 is diagonalizable, so there exists an m×m matrix T such that T⁻¹B1T is a diagonal matrix D1. Let U be the n×n matrix

U = ⎛ T  0       ⎞
    ⎝ 0  E_{n−m} ⎠ .

Then A″ = (SU)⁻¹A(SU) = U⁻¹A′U is diagonal, and

B″ = (SU)⁻¹B(SU) = U⁻¹B′U = ⎛ D1  0  ⎞
                            ⎝ 0   B2 ⎠ .

We now proceed in the same way with the smaller matrix B2.
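The algorithm in c) ii. can be traced on a small example; in this sketch (numpy; A and B are illustrative matrices of my choosing), A has the double eigenvalue 2, and diagonalizing B on that eigenspace already produces a common diagonalizing basis.

```python
import numpy as np

# Commuting diagonalizable matrices: A is diagonal with eigenspace
# span(e1, e2) for lambda = 2, and B respects that eigenspace.
A = np.diag([2.0, 2.0, 3.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])
assert np.allclose(A @ B, B @ A)

# B restricted to the 2-eigenspace of A is [[0,1],[1,0]]; diagonalize it
# and replace the basis of that eigenspace accordingly.
w, T = np.linalg.eigh(B[:2, :2])
C = np.eye(3)
C[:2, :2] = T

Ad = np.linalg.inv(C) @ A @ C
Bd = np.linalg.inv(C) @ B @ C
assert np.allclose(Ad, np.diag(np.diag(Ad)))  # both are now diagonal
assert np.allclose(Bd, np.diag(np.diag(Bd)))
```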

Exercise 3 (Computing the Jordan normal form)

Let

A := ⎛  1  2  2  1 ⎞
     ⎜  2 −1 −3 −2 ⎟
     ⎜ −2  3  5  2 ⎟
     ⎝ −1  2  2  3 ⎠ .

Find a regular matrix S and a matrix J in Jordan normal form such that A = SJS⁻¹.

Hint. The characteristic polynomial of A is p_A = (2 − X)^4.

Solution:

2 is the only eigenvalue. Thus, we consider

C := A − 2E_4 = ⎛ −1  2  2  1 ⎞
                ⎜  2 −3 −3 −2 ⎟
                ⎜ −2  3  3  2 ⎟
                ⎝ −1  2  2  1 ⎠ .

We see that dim(ker(C)) = 2, i.e., A has two linearly independent eigenvectors. This means that J consists of two Jordan blocks, either both of size 2, or one of size 3 and one of size 1. To see which of the two cases occurs, we compute C^2. As C^2 = 0, J will consist of two blocks of size two, i.e.,

J = ⎛ 2 1 0 0 ⎞
    ⎜ 0 2 0 0 ⎟
    ⎜ 0 0 2 1 ⎟
    ⎝ 0 0 0 2 ⎠ .

To find the basis transformation S, we first need to determine a suitable basis. For that, we need to find two linearly independent vectors u2 and u4 such that

dim(⟨u2⟩) = dim(⟨u4⟩) = 2 and ⟨u2⟩ ∩ ⟨u4⟩ = {0}.


For example, we can take

u2 = (1, 0, 0, 0)ᵀ and u4 = (0, 1, 0, 0)ᵀ.

The other basis vectors will be

u1 = Cu2 = (−1, 2, −2, −1)ᵀ and u3 = Cu4 = (2, −3, 3, 2)ᵀ.

The basis transformation S has as its columns the representations of the new basis vectors in terms of the old (standard) basis. So the desired matrix is

S = ⎛ −1  1  2  0 ⎞
    ⎜  2  0 −3  1 ⎟
    ⎜ −2  0  3  0 ⎟
    ⎝ −1  0  2  0 ⎠ .

Exercise 4 (Exponential function for matrices)

Let

J_λ := ⎛ λ 1         ⎞
       ⎜   λ 1       ⎟
       ⎜     ⋱  ⋱   ⎟   ∈ C^{n×n}
       ⎜        λ 1  ⎟
       ⎝          λ  ⎠

be a Jordan block with eigenvalue λ. For an arbitrary matrix A, we define

e^A := Σ_{i=0}^∞ A^i / i!.

(a) Compute J_0^k.

(b) Compute J_λ^k. Hint. Use the decomposition J_λ = λE_n + J_0.

For the following we leave aside all the convergence issues. It is indeed safe here, but not part of linear algebra.

(c) Suppose that A and B are matrices with AB = BA. Show that e^{A+B} = e^A e^B.

(d) Show that e^{S⁻¹AS} = S⁻¹ e^A S, for an arbitrary matrix A and an invertible one S.

(e) Prove that

e^{J_λ} = e^λ Σ_{i=0}^{n−1} J_0^i / i!.

Solution:

a) J_0^k is the matrix with ones on the k-th superdiagonal and zeros elsewhere:

(J_0^k)_{ij} = 1 if j = i + k, and (J_0^k)_{ij} = 0 otherwise.

In particular, J_0^k = 0 for k ≥ n.


b) Since λE_n and J_0 commute, the binomial theorem gives

J_λ^k = (λE_n + J_0)^k = Σ_{i=0}^k (k choose i) λ^i J_0^{k−i}.

c) Using the binomial theorem (valid since AB = BA):

e^{A+B} = Σ_{k=0}^∞ (A+B)^k / k!
        = Σ_{k=0}^∞ Σ_{i=0}^k (k choose i) A^i B^{k−i} / k!
        = Σ_{k=0}^∞ Σ_{i=0}^k A^i B^{k−i} / (i! (k−i)!)
        = (Σ_{k=0}^∞ A^k / k!) · (Σ_{i=0}^∞ B^i / i!)
        = e^A e^B.

d) We have (S⁻¹AS)^k = S⁻¹AS S⁻¹AS ⋯ S⁻¹AS = S⁻¹A A ⋯ A S = S⁻¹A^k S. Consequently,

e^{S⁻¹AS} = Σ_{i=0}^∞ (S⁻¹AS)^i / i! = S⁻¹ [ Σ_{i=0}^∞ A^i / i! ] S = S⁻¹ e^A S.

e) Using parts (c) and (a):

e^{J_λ} = e^{λE_n + J_0} = e^{λE_n} e^{J_0} = e^λ Σ_{i=0}^∞ J_0^i / i! = e^λ Σ_{i=0}^{n−1} J_0^i / i!,

where the last equality holds because J_0^i = 0 for i ≥ n.

Exercise 5 (Square roots)

(a) Let a_0, …, a_{n−1} ∈ C and let N be the n×n matrix

N = ⎛ 0 1        ⎞
    ⎜   0 1      ⎟
    ⎜     ⋱  ⋱  ⎟
    ⎜        0 1 ⎟
    ⎝          0 ⎠ .

When is (Σ_{i=0}^{n−1} a_i N^i)^2 a Jordan block?

(b) Deduce a sufficient condition for dE_n + N ∈ C^{n×n} to have a square root.

(c) Deduce a sufficient condition for complex matrices to have complex square roots.

Remark: using techniques from Lie group theory, which combine differential geometry, topology and group theory, one can also obtain that the exponential map on matrices, A ↦ e^A, is a surjection of C^{n×n} onto GL_n(C). It follows that the equality [e^{A/2}]^2 = e^A yields square roots for any regular matrix.

Solution:

a) First note that, generally speaking, (Σ_{i=0}^k a_i)^2 = Σ_{0≤i,j≤k} a_i a_j whenever a_0, …, a_k are elements of a ring. Hence

A := (Σ_{i=0}^{n−1} a_i N^i)^2 = Σ_{0≤i,j≤n−1} a_i a_j N^{i+j} = Σ_{0≤k≤n−1} d_k N^k, where d_k := Σ_{i=0}^k a_i a_{k−i}

(the terms with i + j ≥ n vanish because N^n = 0).

For A to be a Jordan block, d_1 must equal 1, that is, 2a_0a_1 = 1. Therefore a_0 ≠ 0 and a_1 = 1/(2a_0) are necessary. Moreover, d_k must be zero for 1 < k, that is,

a_k = −(1/(2a_0)) Σ_{i=1}^{k−1} a_i a_{k−i}.

These conditions are also sufficient.

b) If d ≠ 0, then let a_0 be a square root of d (so that d_0 = a_0^2 = d), let a_1 := 1/(2a_0), and for all 1 < k ≤ n − 1 define the a_k by induction:

a_k := −(1/(2a_0)) Σ_{i=1}^{k−1} a_i a_{k−i}.

We then have (Σ_{i=0}^{n−1} a_i N^i)^2 = dE_n + N.

c) If a matrix is invertible, it has a square root. Indeed, it suffices to show that its Jordan normal form has a square root: if A = SJS⁻¹ and R^2 = J, then (SRS⁻¹)^2 = A. Since the matrix is invertible, 0 is not a root of its characteristic polynomial, so all Jordan blocks are of the form dE + N with d ≠ 0. Each of those has a square root by the previous question, and hence so has the whole Jordan normal form, by block multiplication.
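The recursion in part (b) translates directly into a computation; the sketch below (numpy; the function name is of my choosing) builds the coefficients a_0, …, a_{n−1} and checks that the resulting matrix squares to dE_n + N.

```python
import numpy as np

def sqrt_coeffs(d, n):
    """Coefficients a_0..a_{n-1} from part (b): (sum a_i N^i)^2 = d E_n + N."""
    a = np.zeros(n, dtype=complex)
    a[0] = np.sqrt(complex(d))               # any square root of d (d != 0)
    if n > 1:
        a[1] = 1 / (2 * a[0])
    for k in range(2, n):
        a[k] = -sum(a[i] * a[k - i] for i in range(1, k)) / (2 * a[0])
    return a

d, n = -3.0, 5                               # works for any nonzero d
a = sqrt_coeffs(d, n)
N = np.eye(n, k=1)
R = sum(a[i] * np.linalg.matrix_power(N, i) for i in range(n))
assert np.allclose(R @ R, d * np.eye(n) + N)
```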
