
arXiv:2102.02101v1 [math.RA] 3 Feb 2021

2 × 2 block representations of the Moore–Penrose inverse and orthogonal projection matrices

Bernd Fritzsche, Conrad Mädler

February 4, 2021

In this paper, new block representations of Moore–Penrose inverses for arbitrary complex 2×2 block matrices are given. The approach is based on block representations of orthogonal projection matrices.

Keywords Moore–Penrose inverse, generalized inverses of matrices, block representations, orthogonal projection matrices

Mathematics Subject Classification (2010) 15A09 (15A23)

1 Introduction

The aim of this paper is the following: Given an arbitrary complex (p+q)×(s+t) block matrix

E = \begin{bmatrix} a & b \\ c & d \end{bmatrix}    (1.1)

with p×s block a, we give new block representations

E† = \begin{bmatrix} α & β \\ γ & δ \end{bmatrix}

with s×p block α of the Moore–Penrose inverse E† of E, as well as new block representations

P_{R(E)} = \begin{bmatrix} e_{11} & e_{12} \\ e_{21} & e_{22} \end{bmatrix}

with p×p block e_{11} of the orthogonal projection matrix P_{R(E)} onto the column space R(E) of E.

The block entries should be given by expressions involving the blocks a, b, c, and d of E, where generalized inverses are built only of matrices of the block sizes, i. e., with number of rows and columns from the set {p, q, s, t}. We will see that this goal can be realized (without additional assumptions) by computation of {1}-inverses of four (non-negative Hermitian) matrices of the block sizes.

To the best of our knowledge, apart from Hung/Markham [12], only Miao [13], Groß [9], and Yan [27] describe explicit block representations of the Moore–Penrose inverse of arbitrary complex 2×2 partitioned matrices without making additional assumptions.

In Miao [13], a certain weighted Moore–Penrose inverse is used in the formulas for the block entries of E†. In [9], Groß gives a block representation of the Moore–Penrose inverse of non-negative Hermitian 2×2 block matrices. Thus, the well-known formulas E† = E*(EE*)† and E† = (E*E)†E* can then easily be used to derive a block representation for the Moore–Penrose inverse E† of an arbitrary block matrix E. In Yan [27], a full rank factorization of E derived from full rank factorizations of the block entries a, b, c, d is utilized to obtain a block representation of E†.

Assuming certain additional conditions, e. g., on column spaces or ranks, several authors derived block representations of the Moore–Penrose inverse of matrices, see, e. g. [4, 10, 14].

The existence of a Banachiewicz–Schur form for E is studied, e. g., in [2, 21]. Furthermore, special classes of matrices were considered in this context, see, e. g., [18, 19] for so-called block k-circulant matrices. Block representations of E† involving regular transformations, e. g., permutations, have also been considered, see, e. g., [11, 15]. A representation of the Moore–Penrose inverse of a block column or block row in terms of the block entries is given in [1]. Under a certain rank additivity condition, a block representation of m×n partitioned matrices can be found in [20] as well. The list of references on this topic given here is not exhaustive; several results on block representations of partitioned operators are obtained, e. g., in [6, 7, 24–26].

2 Notation

Throughout this paper, let m, n, p, q, s, t be positive integers. We denote by C^{m×n} the set of all complex m×n matrices and by C^n := C^{n×1} the set of all column vectors with n complex entries. We write 0_{m×n} for the zero matrix in C^{m×n} and I_n for the identity matrix in C^{n×n}. Let U and V be linear subspaces of C^n. If U ∩ V = {0_{n×1}}, then we write U ⊕ V for the direct sum of U and V. Let U⊥ be the orthogonal complement of U. If U ⊆ V, we use the notation V ⊖ U := V ∩ (U⊥). We write R(M) and N(M) for the column space and the null space of a complex matrix M. Let M* be the conjugate transpose of a complex matrix M. If M is an arbitrary complex m×n matrix, then there exists a unique complex n×m matrix X such that the four equations

(1) MXM = M,  (2) XMX = X,  (3) (MX)* = MX,  (4) (XM)* = XM    (2.1)

are fulfilled (see [16]). This matrix X is called the Moore–Penrose inverse of M and is usually designated by M†. Following [3, Ch. 1, Sec. 1, Def. 1], for each M ∈ C^{m×n} we denote by M{j, k, . . . , ℓ} the set of all X ∈ C^{n×m} which satisfy equations (j), (k), . . . , (ℓ) from the equations (1)–(4) in (2.1). Each matrix belonging to M{j, k, . . . , ℓ} is said to be a {j, k, . . . , ℓ}-inverse of M.
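For orientation, the four equations in (2.1) can be verified directly on a small example. The following sketch (plain Python with list-of-lists matrices and real entries; the helper names are ours, not the paper's) checks a hand-computed Moore–Penrose inverse of a singular 2×2 matrix.

```python
# Check the four Penrose equations (2.1) for a small real example.
# M is singular; X below is its Moore-Penrose inverse M-dagger.

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def ctrans(A):
    """Conjugate transpose; entries are real here, so just transpose."""
    return [list(row) for row in zip(*A)]

M = [[1.0, 1.0],
     [0.0, 0.0]]
X = [[0.5, 0.0],
     [0.5, 0.0]]  # Moore-Penrose inverse of M

print(matmul(matmul(M, X), M) == M)          # (1) MXM = M
print(matmul(matmul(X, M), X) == X)          # (2) XMX = X
print(ctrans(matmul(M, X)) == matmul(M, X))  # (3) (MX)* = MX
print(ctrans(matmul(X, M)) == matmul(X, M))  # (4) (XM)* = XM
# -> True four times
```

Since the solution X of (2.1) is unique, all four checks must pass simultaneously for X = M†.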

Remark 2.1. Let U be a linear subspace of C^n. Then there exists a unique complex n×n matrix P_U such that P_U x ∈ U and x − P_U x ∈ U⊥ for all x ∈ C^n. This matrix P_U is called the orthogonal projection matrix onto U. If P ∈ C^{n×n}, then P = P_U if and only if the three conditions P² = P and P* = P as well as R(P) = U are fulfilled. Furthermore, the equation P_{U⊥} = I_n − P_U holds true.

Our strategy to give a block representation of the Moore–Penrose inverse E† of the block matrix E given in (1.1) consists of three elementary steps:

Step (I) We consider the following factorization problem for orthogonal projection matrices: Find a complex (s+t)×(p+q) matrix

R = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix}

fulfilling P_{R(E)} = ER with block entries r_{11} ∈ C^{s×p}, r_{12} ∈ C^{s×q}, r_{21} ∈ C^{t×p}, and r_{22} ∈ C^{t×q} expressible explicitly using only the block entries a, b, c, and d of E.

Remark 2.2. Let M ∈ C^{m×n} and let X ∈ C^{n×m}. In view of Remark 2.1, then:

(a) P_{R(M)} = MX if and only if X ∈ M{1,3}.

(b) P_{R(M*)} = XM if and only if X ∈ M{1,4}.

We construct a suitable {1,3}-inverse R of E using:

Theorem 2.3 (Urquhart [22], see also, e. g. [8, Ch. 1, Sec. 5, Thm. 3]). Let M ∈ C^{m×n}.

(a) Let G := MM* and let G^(1) ∈ G{1}. Then M*G^(1) ∈ M{1,2,4}.

(b) Let H := M*M and let H^(1) ∈ H{1}. Then H^(1)M* ∈ M{1,2,3}.
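A quick numerical illustration of part (a) (our own sketch, real arithmetic, with a {1}-inverse chosen by hand): the freedom in G^(1) — here an arbitrary entry 7.0 in the row and column where G vanishes — does not affect the resulting {1,2,4}-inverse.

```python
# Theorem 2.3(a): with G := MM* and any G^(1) in G{1}, the matrix
# X := M* G^(1) is a {1,2,4}-inverse of M.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def ctrans(A):
    return [list(row) for row in zip(*A)]

M = [[1.0, 1.0],
     [0.0, 0.0]]
G = matmul(M, ctrans(M))   # [[2, 0], [0, 0]]
G1 = [[0.5, 0.0],
      [0.0, 7.0]]          # one of many {1}-inverses of G
assert matmul(matmul(G, G1), G) == G

X = matmul(ctrans(M), G1)  # candidate {1,2,4}-inverse of M
print(matmul(matmul(M, X), M) == M)          # Penrose equation (1)
print(matmul(matmul(X, M), X) == X)          # Penrose equation (2)
print(ctrans(matmul(X, M)) == matmul(X, M))  # Penrose equation (4)
```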

Applying Theorem 2.3, we will get an explicit block representation of P_{R(E)} = ER in terms of a, b, c, and d.

Step (II) Analogous to Step (I), we construct a suitable complex (s+t)×(p+q) matrix

L = \begin{bmatrix} ℓ_{11} & ℓ_{12} \\ ℓ_{21} & ℓ_{22} \end{bmatrix}

fulfilling L ∈ E{1,4} and hence P_{R(E*)} = LE.

Step (III) With the matrices L and R we apply:

Theorem 2.4 (Urquhart [22], see e. g. [8, Ch. 1, Sec. 5, Thm. 4]). If M ∈ C^{m×n}, then M^(1,4)MM^(1,3) = M† for every choice of M^(1,3) ∈ M{1,3} and M^(1,4) ∈ M{1,4}.

Regarding Remark 2.2, Theorem 2.4 admits the following reformulation:

Remark 2.5. Let M ∈ C^{m×n} and let L ∈ C^{n×m} and R ∈ C^{n×m} be such that LM = P_{R(M*)} and MR = P_{R(M)}. Then M† = LMR.
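Remark 2.5 can also be checked numerically. In the sketch below (our construction, not from the paper), L and R are generic elements of M{1,4} and M{1,3} — note the arbitrary entries — yet LMR reproduces the unique Moore–Penrose inverse.

```python
# Remark 2.5 / Theorem 2.4: L in M{1,4} and R in M{1,3} yield
# M-dagger = L M R, independently of the particular choices of L and R.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[1.0, 1.0],
     [0.0, 0.0]]
L = [[0.5, 3.0],
     [0.5, -2.0]]   # element of M{1,4}; 3.0 and -2.0 are arbitrary
R = [[0.8, 4.0],
     [0.2, -4.0]]   # element of M{1,3}; 4.0 and -4.0 are arbitrary

assert matmul(matmul(M, L), M) == M   # L satisfies Penrose equation (1)
assert matmul(matmul(M, R), M) == M   # R satisfies Penrose equation (1)

Mdag = matmul(matmul(L, M), R)
print(Mdag)  # -> [[0.5, 0.0], [0.5, 0.0]], the Moore-Penrose inverse of M
```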

Consider an additive decomposition M = U + V of an m×n matrix M with two m×n matrices U and V fulfilling UV* = 0_{m×m}. In this situation, a result of Cline [5] is applicable to obtain a non-trivial representation of M† as a sum of U† and a further matrix. By Hung/Markham [12], a decomposition E = U + V with U = [a 0; c 0] and V = [0 b; 0 d] is used in this way to derive a block representation of E† involving only Moore–Penrose inverses of matrices of the block sizes.

Regarding Remarks 2.2 and 2.1 as well as (2.1), the orthogonal projection matrix Q := P_{R(U*)} fulfills UQ = UU†U = U and VQ = (QV*)* = (U†(UV*))* = 0_{m×n}. Consequently, MQ = U and M(I_n − Q) = V. Conversely, given M ∈ C^{m×n} and an orthogonal projection matrix Q ∈ C^{n×n}, it is readily checked that U := MQ and V := M(I_n − Q) fulfill M = U + V and UV* = 0_{m×m}. Thus, every decomposition M = U + V with UV* = 0_{m×m} can be written as M = MQ + M(I_n − Q) with some orthogonal projection matrix Q ∈ C^{n×n} occurring on the right-hand side and vice versa. Although not explicitly using Cline's theorem, our investigations involve an analogous decomposition, namely E = P_𝓢E + (I_{p+q} − P_𝓢)E with the orthogonal projection matrix P_𝓢 onto the linear subspace 𝓢 spanned by the first s columns of E occurring on the left-hand side (see Lemma 3.2).
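The correspondence between decompositions M = U + V with UV* = 0 and orthogonal projection matrices Q described above can be made concrete (a toy example of ours):

```python
# M = MQ + M(I_n - Q) with an orthogonal projection matrix Q yields parts
# U = MQ and V = M(I_n - Q) satisfying UV* = 0.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def ctrans(A):
    return [list(row) for row in zip(*A)]

M = [[1.0, 1.0],
     [2.0, -1.0]]
Q = [[1.0, 0.0],
     [0.0, 0.0]]  # orthogonal projection onto span{(1, 0)}

U = matmul(M, Q)                                               # [[1, 0], [2, 0]]
V = [[M[i][j] - U[i][j] for j in range(2)] for i in range(2)]  # M(I - Q)
print([[U[i][j] + V[i][j] for j in range(2)] for i in range(2)] == M)
print(matmul(U, ctrans(V)))  # -> [[0.0, 0.0], [0.0, 0.0]], i.e. UV* = 0
```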

3 Main results

We consider a complex (p+q)×(s+t) matrix E. Let (1.1) be the block representation of E with p×s block a. Setting

Y := [a, b],  Z := [c, d],  S := \begin{bmatrix} a \\ c \end{bmatrix},  T := \begin{bmatrix} b \\ d \end{bmatrix},    (3.1)

then

E = \begin{bmatrix} Y \\ Z \end{bmatrix},  E = [S, T].    (3.2)

Let

µ := aa* + bb*,  σ := a*a + c*c,    (3.3)
ζ := cc* + dd*,  τ := b*b + d*d,
ρ := ca* + db*,  λ := a*b + c*d.    (3.4)

In view of (3.1), then

µ = YY*,  σ = S*S,    (3.5)
ζ = ZZ*,  τ = T*T,    (3.6)
ρ = ZY*,  λ = S*T.    (3.7)

Choose µ^(1) ∈ µ{1} and σ^(1) ∈ σ{1}. Regarding (3.5), then Theorem 2.3 shows that

Y^(1,2,4) := Y*µ^(1),  S^(1,2,3) := σ^(1)S*    (3.8)

fulfill

Y^(1,2,4) ∈ Y{1,2,4},  S^(1,2,3) ∈ S{1,2,3}.    (3.9)

Let

φ := c − (ca* + db*)µ^(1)a,  ψ := d − (ca* + db*)µ^(1)b,    (3.10)
η := b − aσ^(1)(a*b + c*d),  θ := d − cσ^(1)(a*b + c*d).    (3.11)

Because of (3.8), (3.7), (3.1), (3.4), (3.10), and (3.11), then

V := [φ, ψ]  and  W := \begin{bmatrix} η \\ θ \end{bmatrix}    (3.12)

admit the representations

Z(I_{s+t} − Y^(1,2,4)Y) = Z − ZY*µ^(1)Y = Z − ρµ^(1)Y
  = [c, d] − ρµ^(1)[a, b] = [φ, ψ] = V    (3.13)

and

(I_{p+q} − SS^(1,2,3))T = T − Sσ^(1)S*T = T − Sσ^(1)λ
  = \begin{bmatrix} b \\ d \end{bmatrix} − \begin{bmatrix} a \\ c \end{bmatrix} σ^(1)λ = \begin{bmatrix} η \\ θ \end{bmatrix} = W.    (3.14)

Using (3.13), (3.14), (3.9), Remarks 2.2 and 2.1, and [R(Y*)]⊥ = N(Y), we can infer

V = ZP_{N(Y)},  W = P_{[R(S)]⊥}T.    (3.15)


Let

ν := φφ* + ψψ*,  ω := η*η + θ*θ.    (3.16)

In view of (3.12), (3.15), and Remark 2.1, then

ν = VV* = ZP_{N(Y)}Z* = VZ*,  ω = W*W = T*P_{[R(S)]⊥}T = T*W.    (3.17)

Choose ν^(1) ∈ ν{1} and ω^(1) ∈ ω{1}. Regarding (3.17), then Theorem 2.3 shows that

V^(1,2,4) := V*ν^(1)  and  W^(1,2,3) := ω^(1)W*    (3.18)

fulfill

V^(1,2,4) ∈ V{1,2,4}  and  W^(1,2,3) ∈ W{1,2,3}.    (3.19)

Obviously, we have µ ∈ C^{p×p}_≥, σ ∈ C^{s×s}_≥, ζ ∈ C^{q×q}_≥, τ ∈ C^{t×t}_≥, ν ∈ C^{q×q}_≥, ω ∈ C^{t×t}_≥, ρ ∈ C^{q×p}, and λ ∈ C^{s×t}, where C^{n×n}_≥ denotes the set of all non-negative Hermitian complex n×n matrices.

Remark 3.1. Let

L := [(I_{s+t} − V^(1,2,4)Z)Y^(1,2,4), V^(1,2,4)],  R := \begin{bmatrix} S^(1,2,3)(I_{p+q} − TW^(1,2,3)) \\ W^(1,2,3) \end{bmatrix}.    (3.20)

Regarding (3.20), (3.18), (3.8), (3.7), (3.1), and (3.12), then

L = [(I_{s+t} − V*ν^(1)Z)Y*µ^(1), V*ν^(1)] = [(Y* − V*ν^(1)ZY*)µ^(1), V*ν^(1)]
  = [(Y* − V*ν^(1)ρ)µ^(1), V*ν^(1)] = [Y*µ^(1), 0_{(s+t)×q}] + [−V*ν^(1)ρµ^(1), V*ν^(1)]
  = Y*µ^(1)[I_p, 0_{p×q}] + V*ν^(1)[−ρµ^(1), I_q]
  = \begin{bmatrix} a* \\ b* \end{bmatrix} µ^(1)[I_p, 0_{p×q}] + \begin{bmatrix} φ* \\ ψ* \end{bmatrix} ν^(1)[−ρµ^(1), I_q] = \begin{bmatrix} ℓ_{11} & ℓ_{12} \\ ℓ_{21} & ℓ_{22} \end{bmatrix},

where

ℓ_{12} := φ*ν^(1),  ℓ_{11} := (a* − ℓ_{12}ρ)µ^(1),  ℓ_{22} := ψ*ν^(1),  ℓ_{21} := (b* − ℓ_{22}ρ)µ^(1),    (3.21)

and

R = \begin{bmatrix} σ^(1)S*(I_{p+q} − Tω^(1)W*) \\ ω^(1)W* \end{bmatrix} = \begin{bmatrix} σ^(1)(S* − S*Tω^(1)W*) \\ ω^(1)W* \end{bmatrix} = \begin{bmatrix} σ^(1)(S* − λω^(1)W*) \\ ω^(1)W* \end{bmatrix}
  = \begin{bmatrix} σ^(1)S* \\ 0_{t×(p+q)} \end{bmatrix} + \begin{bmatrix} −σ^(1)λω^(1)W* \\ ω^(1)W* \end{bmatrix} = \begin{bmatrix} I_s \\ 0_{t×s} \end{bmatrix} σ^(1)S* + \begin{bmatrix} −σ^(1)λ \\ I_t \end{bmatrix} ω^(1)W*
  = \begin{bmatrix} I_s \\ 0_{t×s} \end{bmatrix} σ^(1)[a*, c*] + \begin{bmatrix} −σ^(1)λ \\ I_t \end{bmatrix} ω^(1)[η*, θ*] = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix},

where

r_{21} := ω^(1)η*,  r_{11} := σ^(1)(a* − λr_{21}),  r_{22} := ω^(1)θ*,  r_{12} := σ^(1)(c* − λr_{22}).    (3.22)


Lemma 3.2. Let 𝓔 := R(E), let 𝓢 := R(S), and let 𝓦 := R(W). Then 𝓦 = 𝓔 ⊖ 𝓢 and P_𝓔 = P_𝓢 + P_𝓦 = ER.

Proof. We first check 𝓦 = 𝓔 ∩ (𝓢⊥). Because of (3.19), (3.9), and Remark 2.2(a), we have P_𝓦 = WW^(1,2,3) and P_𝓢 = SS^(1,2,3). According to Remark 2.1, hence I_{p+q} − SS^(1,2,3) = P_{𝓢⊥}. By virtue of (3.14), then W = P_{𝓢⊥}T follows. From Remark 2.1 we know R(P_{𝓢⊥}) = 𝓢⊥. Consequently, 𝓦 ⊆ 𝓢⊥. Regarding (3.2) and (3.14), we have 𝓢 ⊆ 𝓔 and

W = (I_{p+q} − SS^(1,2,3))T = T − SS^(1,2,3)T = [S, T] \begin{bmatrix} −S^(1,2,3)T \\ I_t \end{bmatrix} = E \begin{bmatrix} −S^(1,2,3)T \\ I_t \end{bmatrix},

implying 𝓦 ⊆ 𝓔. Thus, 𝓦 ⊆ 𝓔 ∩ (𝓢⊥) is proved. Now we consider an arbitrary w ∈ 𝓔 ∩ (𝓢⊥). Then w ∈ 𝓔; so there exists some v ∈ C^{s+t} with w = Ev. Let v = \begin{bmatrix} x \\ y \end{bmatrix} be the block representation of v with x ∈ C^s. Regarding (3.2), then w = Sx + Ty. In view of w ∈ 𝓢⊥ and Sx ∈ 𝓢, furthermore P_{𝓢⊥}w = w and P_{𝓢⊥}Sx = 0_{(p+q)×1}. Taking additionally into account W = P_{𝓢⊥}T, we obtain then

w = P_{𝓢⊥}w = P_{𝓢⊥}(Sx + Ty) = P_{𝓢⊥}Ty = Wy,

implying w ∈ 𝓦. Thus, we have also shown 𝓔 ∩ (𝓢⊥) ⊆ 𝓦. Therefore, 𝓦 = 𝓔 ∩ (𝓢⊥) holds true. Since 𝓢 ⊆ 𝓔, hence 𝓦 = 𝓔 ⊖ 𝓢. Consequently, P_𝓦 = P_𝓔 − P_𝓢 follows (see, e. g. [23, Thm. 4.30(c)]). Thus, P_𝓔 = P_𝓢 + P_𝓦. Taking additionally into account P_𝓢 = SS^(1,2,3) and P_𝓦 = WW^(1,2,3) as well as (3.14), (3.2), and (3.20), then we can conclude

P_𝓔 = SS^(1,2,3) + WW^(1,2,3) = SS^(1,2,3) + (I_{p+q} − SS^(1,2,3))TW^(1,2,3)
  = SS^(1,2,3)(I_{p+q} − TW^(1,2,3)) + TW^(1,2,3) = [S, T] \begin{bmatrix} S^(1,2,3)(I_{p+q} − TW^(1,2,3)) \\ W^(1,2,3) \end{bmatrix} = ER. □

The following result can be proved analogously. We omit the details.

Lemma 3.3. Let Ẽ := R(E*), let Ỹ := R(Y*), and let Ṽ := R(V*). Then Ṽ = Ẽ ⊖ Ỹ and P_Ẽ = P_Ỹ + P_Ṽ = LE.

Remark 3.4. From Lemma 3.2 and Remark 2.2(a) we can infer R ∈ E{1,3}, whereas Lemma 3.3 and Remark 2.2(b) yield L ∈ E{1,4}.

Now we obtain the announced block representations of orthogonal projection matrices.

Proposition 3.5. Let E be a complex (p+q)×(s+t) matrix and let (1.1) be the block representation of E with p×s block a. Let S be given by (3.1). Let σ and λ be given by (3.3) and (3.4). Let σ^(1) ∈ σ{1}. Let η, θ and W be given by (3.11) and (3.12). Let ω be given by (3.16) and let ω^(1) ∈ ω{1}. Then

P_{R(E)} = Sσ^(1)S* + Wω^(1)W* = \begin{bmatrix} aσ^(1)a* + ηω^(1)η* & aσ^(1)c* + ηω^(1)θ* \\ cσ^(1)a* + θω^(1)η* & cσ^(1)c* + θω^(1)θ* \end{bmatrix}.

Proof. In the proof of Lemma 3.2, we have already shown P_{R(E)} = SS^(1,2,3) + WW^(1,2,3). Taking additionally into account (3.8), (3.18), (3.1), and (3.12), the assertions follow. □
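To illustrate Proposition 3.5 in the simplest setting, take p = q = s = t = 1, so that all blocks and all {1}-inverses are scalars (for a scalar x, one may take x^(1) = 1/x if x ≠ 0 and x^(1) = 0 otherwise). The sketch below (our own, exact rational arithmetic) builds P_{R(E)} for a rank-one E from a, b, c, d alone.

```python
# Proposition 3.5 with scalar blocks (p = q = s = t = 1): the projection
# P_{R(E)} is assembled from a, b, c, d alone and satisfies P^2 = P,
# P* = P, and P E = E.  Exact rational arithmetic throughout.

from fractions import Fraction as F

def inv1(x):
    """A {1}-inverse of a scalar: 1/x if x != 0, else 0."""
    return F(0) if x == 0 else F(1) / x

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

a, b, c, d = F(1), F(2), F(2), F(4)    # E = [[1, 2], [2, 4]] has rank 1

sigma = a * a + c * c                  # sigma = S*S, cf. (3.3)
lam   = a * b + c * d                  # lambda = S*T, cf. (3.4)
s1 = inv1(sigma)
eta   = b - a * s1 * lam               # (3.11)
theta = d - c * s1 * lam
omega = eta * eta + theta * theta      # (3.16)
w1 = inv1(omega)

P = [[a * s1 * a + eta * w1 * eta,   a * s1 * c + eta * w1 * theta],
     [c * s1 * a + theta * w1 * eta, c * s1 * c + theta * w1 * theta]]

E = [[a, b], [c, d]]
print(P[0][0], P[0][1])          # 1/5 2/5
print(matmul(P, P) == P)         # idempotent
print(matmul(P, E) == E)         # columns of E lie in R(E), so PE = E
```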

Now we are able to prove a 2×2 block representation of the Moore–Penrose inverse.


Theorem 3.6. Let E be a complex (p+q)×(s+t) matrix and let (1.1) be the block representation of E with p×s block a. Let Y, S be given by (3.1). Let µ, σ and ρ, λ be given by (3.3) and (3.4). Let µ^(1) ∈ µ{1} and let σ^(1) ∈ σ{1}. Let φ, ψ and η, θ be given by (3.10) and (3.11). Let V and W be given by (3.12). Let ν and ω be given by (3.16). Let ν^(1) ∈ ν{1} and let ω^(1) ∈ ω{1}. Then

E† = LER = \begin{bmatrix} α & β \\ γ & δ \end{bmatrix}
  = (Y* − V*ν^(1)ρ)µ^(1)aσ^(1)(S* − λω^(1)W*) + (Y* − V*ν^(1)ρ)µ^(1)bω^(1)W*
  + V*ν^(1)cσ^(1)(S* − λω^(1)W*) + V*ν^(1)dω^(1)W*

with

α := ℓ_{11}ar_{11} + ℓ_{11}br_{21} + ℓ_{12}cr_{11} + ℓ_{12}dr_{21},
β := ℓ_{11}ar_{12} + ℓ_{11}br_{22} + ℓ_{12}cr_{12} + ℓ_{12}dr_{22},
γ := ℓ_{21}ar_{11} + ℓ_{21}br_{21} + ℓ_{22}cr_{11} + ℓ_{22}dr_{21},
δ := ℓ_{21}ar_{12} + ℓ_{21}br_{22} + ℓ_{22}cr_{12} + ℓ_{22}dr_{22},

where, for each j, k ∈ {1, 2}, the matrices ℓ_{jk} and r_{jk} are given by (3.21) and (3.22), respectively.

Proof. According to Lemmas 3.3 and 3.2, we have P_{R(E*)} = LE and P_{R(E)} = ER. Thus, we can apply Remark 2.5 to obtain E† = LER. Using Remark 3.1 and (1.1), we furthermore obtain

LER = [(Y* − V*ν^(1)ρ)µ^(1), V*ν^(1)] \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} σ^(1)(S* − λω^(1)W*) \\ ω^(1)W* \end{bmatrix}
  = (Y* − V*ν^(1)ρ)µ^(1)aσ^(1)(S* − λω^(1)W*) + (Y* − V*ν^(1)ρ)µ^(1)bω^(1)W*
  + V*ν^(1)cσ^(1)(S* − λω^(1)W*) + V*ν^(1)dω^(1)W*

as well as

LER = \begin{bmatrix} ℓ_{11} & ℓ_{12} \\ ℓ_{21} & ℓ_{22} \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix} = \begin{bmatrix} α & β \\ γ & δ \end{bmatrix}. □
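The entire construction of Theorem 3.6 can be exercised end to end in the scalar-block case p = q = s = t = 1 (a sketch of ours; real scalars, so all conjugates drop out, and scalar {1}-inverses are 1/x or 0). Both an invertible and a rank-one E are checked against the four Penrose equations.

```python
# Theorem 3.6 with scalar blocks: compute E-dagger = L E R from a, b, c, d
# via (3.3)-(3.4), (3.10)-(3.11), (3.16), (3.21), (3.22), then verify the
# four Penrose equations (2.1).  Exact rational arithmetic.

from fractions import Fraction as F

def inv1(x):
    """A {1}-inverse of a scalar: 1/x if x != 0, else 0."""
    return F(0) if x == 0 else F(1) / x

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def ctrans(A):
    return [list(row) for row in zip(*A)]  # real entries: plain transpose

def mp_inverse(a, b, c, d):
    mu,  sigma = a*a + b*b, a*a + c*c            # (3.3)
    rho, lam   = c*a + d*b, a*b + c*d            # (3.4)
    m1, s1 = inv1(mu), inv1(sigma)
    phi, psi   = c - rho*m1*a, d - rho*m1*b      # (3.10)
    eta, theta = b - a*s1*lam, d - c*s1*lam      # (3.11)
    n1 = inv1(phi*phi + psi*psi)                 # nu^(1), cf. (3.16)
    w1 = inv1(eta*eta + theta*theta)             # omega^(1)
    l12, l22 = phi*n1, psi*n1                    # (3.21)
    l11, l21 = (a - l12*rho)*m1, (b - l22*rho)*m1
    r21, r22 = w1*eta, w1*theta                  # (3.22)
    r11, r12 = s1*(a - lam*r21), s1*(c - lam*r22)
    L = [[l11, l12], [l21, l22]]
    R = [[r11, r12], [r21, r22]]
    return matmul(matmul(L, [[a, b], [c, d]]), R)   # E-dagger = L E R

for a, b, c, d in [(F(1), F(2), F(3), F(4)),    # invertible E
                   (F(1), F(2), F(2), F(4))]:   # rank-one E
    E, X = [[a, b], [c, d]], mp_inverse(a, b, c, d)
    assert matmul(matmul(E, X), E) == E
    assert matmul(matmul(X, E), X) == X
    assert ctrans(matmul(E, X)) == matmul(E, X)
    assert ctrans(matmul(X, E)) == matmul(X, E)
print("Penrose equations (2.1) hold for both examples")
```

For the invertible example the formulas reproduce E⁻¹, as they must, since in that case E{1,2,4} = E{1,2,3} = {E⁻¹}.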

4 Examples for consequences

In this section, we give some examples of applications of the block representations of orthogonal projection matrices and Moore–Penrose inverses given in Proposition 3.5 and Theorem 3.6. In order to avoid lengthy formulas, we give only hints for computations.

Example 4.1. Let N be a positive integer and let U and V be two complementary linear subspaces of C^N, i. e., the subspaces U and V fulfill U ⊕ V = C^N. Then there exists a unique complex N×N matrix P_{U,V} such that P_{U,V}x = u for all x ∈ C^N, where x = u + v is the unique representation of x with u ∈ U and v ∈ V. This matrix P_{U,V} is called the oblique projection matrix onto U along V and admits the representations

P_{U,V} = (P_{V⊥}P_U)† = [(I_N − P_V)P_U]†,    (4.1)

see [8] or also [3, Ch. 2, Sec. 7, Ex. 60, formula (80)]. Assume that N = p+q and that U = R(E) and V = R(F) for two matrices E ∈ C^{(p+q)×(s+t)} and F ∈ C^{(p+q)×(m+n)} with block representations (1.1) and F = \begin{bmatrix} e & f \\ g & h \end{bmatrix}, where a is a p×s block and e is a p×m block. Then (4.1) together with Proposition 3.5 and Theorem 3.6 could be used to obtain a block representation of P_{U,V} in terms of a, b, c, d and e, f, g, h.


Example 4.2. Because U ⊕ (U⊥) = C^n holds true for every linear subspace U of C^n, the Moore–Penrose inverse of matrices is a special case of the uniquely determined {1,2}-inverse with simultaneously prescribed column space and null space. More precisely, if M ∈ C^{m×n} and linear subspaces U of C^m and V of C^n with R(M) ⊕ U = C^m and N(M) ⊕ V = C^n are given, then there exists a unique complex n×m matrix X such that the four conditions

MXM = M,  XMX = X,  R(X) = V,  and  N(X) = U

are fulfilled. This matrix X is denoted by M^(1,2)_{V,U} and admits the representations

M^(1,2)_{V,U} = P_{V,N(M)}M^(1)P_{R(M),U} = (P_{[N(M)]⊥}P_V)†M^(1)(P_{U⊥}P_{R(M)})†    (4.2)

with every M^(1) ∈ M{1} (see, e. g. [3, Ch. 2, Sec. 6, Thm. 12]). In particular, (4.2) is valid for M^(1) = M†. Furthermore, M† = M^(1,2)_{[N(M)]⊥,[R(M)]⊥}. Example 4.1 could be used to obtain a block representation of M^(1,2)_{V,U} if the subspaces U and V are given as column spaces of certain matrices partitioned accordingly to a block representation of M, and if a matrix M^(1) ∈ M{1} with known corresponding block representation is available. (In particular, for M^(1) = M† one can use Theorem 3.6.)

5 An alternative approach

In this final section, we give alternative representations of the matrices L and R occurring in Theorem 3.6 and Lemmas 3.3 and 3.2. Utilizing these representations, further block representations of the Moore–Penrose inverse E† could possibly be obtained, in particular in the case of E satisfying additional conditions. We will not pursue this direction any further here. We continue to use the notations given above.

Lemma 5.1 (Rohde [17], see, e. g. [8, Ch. 5, Sec. 2, Ex. 10(a)]). Let M ∈ C^{(p+q)×(p+q)}_≥ and let

M = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix}

be the block representation of M with p×p block m_{11}. Let m_{11}^(1) ∈ m_{11}{1} and let ς := m_{22} − m_{21}m_{11}^(1)m_{12}. Let ς^(1) ∈ ς{1}. Then

M^(1) := \begin{bmatrix} m_{11}^(1) + m_{11}^(1)m_{12}ς^(1)m_{21}m_{11}^(1) & −m_{11}^(1)m_{12}ς^(1) \\ −ς^(1)m_{21}m_{11}^(1) & ς^(1) \end{bmatrix}

belongs to M{1}.

Remark 5.2. Regarding (3.17), (3.13), (3.14), (3.8), (3.6), and (3.7), we can infer

ν = VZ* = Z(I_{s+t} − Y^(1,2,4)Y)Z* = Z(I_{s+t} − Y*µ^(1)Y)Z* = ζ − ρµ^(1)ρ*    (5.1)

and

ω = T*W = T*(I_{p+q} − SS^(1,2,3))T = T*(I_{p+q} − Sσ^(1)S*)T = τ − λ*σ^(1)λ.    (5.2)

Lemma 5.3. Let

G := EE*  and  H := E*E.    (5.3)


Let µ^(1) ∈ µ{1} and ν^(1) ∈ ν{1} and let σ^(1) ∈ σ{1} and ω^(1) ∈ ω{1}. Then the matrices

G^(1) := \begin{bmatrix} µ^(1) + µ^(1)ρ*ν^(1)ρµ^(1) & −µ^(1)ρ*ν^(1) \\ −ν^(1)ρµ^(1) & ν^(1) \end{bmatrix}    (5.4)

and

H^(1) := \begin{bmatrix} σ^(1) + σ^(1)λω^(1)λ*σ^(1) & −σ^(1)λω^(1) \\ −ω^(1)λ*σ^(1) & ω^(1) \end{bmatrix}    (5.5)

fulfill G^(1) ∈ G{1} and H^(1) ∈ H{1}.

Proof. Clearly, G ∈ C^{(p+q)×(p+q)}_≥ and H ∈ C^{(s+t)×(s+t)}_≥. Regarding (3.2), (3.5), and (3.6), we have

G = \begin{bmatrix} Y \\ Z \end{bmatrix} [Y*, Z*] = \begin{bmatrix} YY* & YZ* \\ ZY* & ZZ* \end{bmatrix} = \begin{bmatrix} µ & ρ* \\ ρ & ζ \end{bmatrix}

and

H = \begin{bmatrix} S* \\ T* \end{bmatrix} [S, T] = \begin{bmatrix} S*S & S*T \\ T*S & T*T \end{bmatrix} = \begin{bmatrix} σ & λ \\ λ* & τ \end{bmatrix}.

Taking additionally into account Remark 5.2, the assertions immediately follow from Lemma 5.1. □
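A scalar-block spot check of (5.4) (our own, exact rationals): for E = [[1, 2], [3, 4]] one gets µ = 5, ρ = 11, ζ = 25, the Schur complement ν = ζ − ρµ^(1)ρ* = 4/5 as in (5.1), and the Banachiewicz–Schur matrix G^(1) is indeed a {1}-inverse of G = EE*.

```python
# Lemma 5.3 with scalar blocks: G = EE* = [[mu, rho*], [rho, zeta]] and
# G^(1) from (5.4) fulfills G G^(1) G = G.  Real scalars, so rho* = rho.

from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

a, b, c, d = F(1), F(2), F(3), F(4)
mu   = a*a + b*b          # 5
zeta = c*c + d*d          # 25
rho  = c*a + d*b          # 11
m1 = F(1) / mu
nu = zeta - rho*m1*rho    # Schur complement = nu, cf. (5.1); here 4/5
n1 = F(1) / nu

G  = [[mu, rho], [rho, zeta]]
G1 = [[m1 + m1*rho*n1*rho*m1, -m1*rho*n1],
      [-n1*rho*m1,            n1]]       # (5.4)
print(matmul(matmul(G, G1), G) == G)     # -> True
```

Since this G is invertible, G^(1) here even coincides with G⁻¹; the point of (5.4) is that the same formula works with arbitrary {1}-inverses in the singular case.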

Finally, in the following result, we not only get new representations for the matrices L and R occurring in Theorem 3.6 and Lemmas 3.3 and 3.2, but also obtain their membership in the sets E{1,2,4} and E{1,2,3}, respectively, thereby improving Remark 3.4.

Lemma 5.4. The matrices L and R admit the representations L = E*G^(1) and R = H^(1)E* and fulfill L ∈ E{1,2,4} and R ∈ E{1,2,3}.

Proof. Using (3.9) and Remarks 2.2 and 2.1, we have (Y^(1,2,4)Y)* = P_{R(Y*)}* = P_{R(Y*)} = Y^(1,2,4)Y and, analogously, (SS^(1,2,3))* = SS^(1,2,3). Taking additionally into account (3.13), (3.14), (3.8), and (3.7), we thus can conclude

V* = (I_{s+t} − Y^(1,2,4)Y)Z* = Z* − Y*µ^(1)YZ* = Z* − Y*µ^(1)ρ*    (5.6)

and

W* = T*(I_{p+q} − SS^(1,2,3)) = T* − T*Sσ^(1)S* = T* − λ*σ^(1)S*.    (5.7)

Regarding (3.2), (5.4), (5.5), (5.1), (5.2), (5.6), (5.7), and Remark 3.1, we obtain

E*G^(1) = [Y*, Z*] ( \begin{bmatrix} I_p \\ 0_{q×p} \end{bmatrix} µ^(1)[I_p, 0_{p×q}] + \begin{bmatrix} −µ^(1)ρ* \\ I_q \end{bmatrix} ν^(1)[−ρµ^(1), I_q] )
  = Y*µ^(1)[I_p, 0_{p×q}] + (Z* − Y*µ^(1)ρ*)ν^(1)[−ρµ^(1), I_q] = L    (5.8)

and

H^(1)E* = ( \begin{bmatrix} I_s \\ 0_{t×s} \end{bmatrix} σ^(1)[I_s, 0_{s×t}] + \begin{bmatrix} −σ^(1)λ \\ I_t \end{bmatrix} ω^(1)[−λ*σ^(1), I_t] ) \begin{bmatrix} S* \\ T* \end{bmatrix}
  = \begin{bmatrix} I_s \\ 0_{t×s} \end{bmatrix} σ^(1)S* + \begin{bmatrix} −σ^(1)λ \\ I_t \end{bmatrix} ω^(1)(T* − λ*σ^(1)S*) = R.    (5.9)

According to Lemma 5.3, we have G^(1) ∈ G{1} and H^(1) ∈ H{1}. Taking additionally into account (5.3), (5.8), and (5.9), then L ∈ E{1,2,4} and R ∈ E{1,2,3} follow from Theorem 2.3. □

Observe that Lemma 5.4 in connection with Theorem 3.6 yields a factorization E† = LER with particular matrices L ∈ E{1,2,4} and R ∈ E{1,2,3}. This gives a special factorization of the kind mentioned in Urquhart's result (Theorem 2.4), whereby all matrices can be expressed explicitly in terms of the block entries a, b, c, d of the given matrix E.


References

[1] Baksalary, J.K. and O.M. Baksalary: Particular formulae for the Moore-Penrose in- verse of a columnwise partitioned matrix. Linear Algebra Appl., 421(1):16–23, 2007.

https://doi.org/10.1016/j.laa.2006.03.031.

[2] Baksalary, J.K. and G.P.H. Styan: Generalized inverses of partitioned matrices in Banachiewicz-Schur form. Linear Algebra Appl., 354:41–47, 2002. https://doi.org/10.1016/S0024-3795(02)00334-8. Ninth special issue on linear algebra and statistics.

[3] Ben-Israel, A. and T.N.E. Greville: Generalized inverses, vol. 15 of CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer-Verlag, New York, second ed., 2003. Theory and applications.

[4] Castro-González, N., M.F. Martínez-Serrano, and J. Robles: Expressions for the Moore- Penrose inverse of block matrices involving the Schur complement. Linear Algebra Appl., 471:353–368, 2015. https://doi.org/10.1016/j.laa.2015.01.003.

[5] Cline, R.E.: Representations for the generalized inverse of sums of matrices. J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2:99–114, 1965.

[6] Deng, C.Y. and H.K. Du: Representations of the Moore-Penrose inverse of 2 × 2 block operator valued matrices. J. Korean Math. Soc., 46(6):1139–1150, 2009.

https://doi.org/10.4134/JKMS.2009.46.6.1139.

[7] Deng, C.Y. and H.K. Du: Representations of the Moore-Penrose inverse for a class of 2-by-2 block operator valued partial matrices. Linear Multilinear Algebra, 58(1-2):15–26, 2010. https://doi.org/10.1080/03081080801980457.

[8] Greville, T.N.E.: Solutions of the matrix equation XAX = X, and relations between oblique and orthogonal projectors. SIAM J. Appl. Math., 26:828–832, 1974. https://doi.org/10.1137/0126074.

[9] Groß, J.: The Moore-Penrose inverse of a partitioned nonnegative definite matrix. Linear Algebra Appl., 321:113–121, 2000. https://doi.org/10.1016/S0024-3795(99)00073-7. Linear algebra and statistics (Fort Lauderdale, FL, 1998).

[10] Hartwig, R.E.: Rank factorization and Moore-Penrose inversion. Indust. Math., 26(1):49–63, 1976.

[11] He, C.N.: General forms for Moore-Penrose inverses of matrices by block permutation. J. Nat. Sci. Hunan Norm. Univ., 29(4):1–5, 2006.

[12] Hung, C.H. and T.L. Markham: The Moore-Penrose inverse of a partitioned matrix M = (A D; B C). Linear Algebra Appl., 11:73–86, 1975. https://doi.org/10.1016/0024-3795(75)90118-4.

[13] Miao, J.M.: General expressions for the Moore-Penrose inverse of a 2×2 block matrix. Linear Algebra Appl., 151:1–15, 1991. https://doi.org/10.1016/0024-3795(91)90351-V.


[14] Mihailović, B., V. Miler Jerković, and B. Malešević: Solving fuzzy linear systems using a block representation of generalized inverses: the Moore-Penrose inverse. Fuzzy Sets and Systems, 353:44–65, 2018. https://doi.org/10.1016/j.fss.2017.11.007.

[15] Milovanović, G.V. and P.S. Stanimirović: On Moore-Penrose inverse of block matrices and full-rank factorization. Publ. Inst. Math. (Beograd) (N.S.), 62(76):26–40, 1997.

[16] Penrose, R.: A generalized inverse for matrices. Proc. Cambridge Philos. Soc., 51:406–413, 1955.

[17] Rohde, C.A.: Generalized inverses of partitioned matrices. J. Soc. Indust. Appl. Math., 13:1033–1035, 1965.

[18] Smith, R.L.: Moore-Penrose inverses of block circulant and block k-circulant matrices. Linear Algebra Appl., 16(3):237–245, 1977.

https://doi.org/10.1016/0024-3795(77)90007-6.

[19] Tang, S. and H.Z. Wu: The Moore-Penrose inverse and the weighted Drazin inverse of block k-circulant matrices. J. Hefei Univ. Technol. Nat. Sci., 32(9):1442–1444, 1448, 2009.

[20] Tian, Y.: The Moore-Penrose inverses of m × n block matrices and their applications. Linear Algebra Appl., 283(1-3):35–60, 1998.

https://doi.org/10.1016/S0024-3795(98)10049-6.

[21] Tian, Y. and Y. Takane: More on generalized inverses of partitioned matrices with Banachiewicz-Schur forms. Linear Algebra Appl., 430(5-6):1641–1655, 2009.

https://doi.org/10.1016/j.laa.2008.06.007.

[22] Urquhart, N.S.: Computation of generalized inverse matrices which satisfy specified con- ditions. SIAM Rev., 10:216–218, 1968. https://doi.org/10.1137/1010035.

[23] Weidmann, J.: Linear operators in Hilbert spaces, vol. 68 of Graduate Texts in Mathe- matics. Springer-Verlag, New York-Berlin, 1980. Translated from the German by Joseph Szücs.

[24] Xu, Q.: Moore-Penrose inverses of partitioned adjointable operators on Hilbert C-modules. Linear Algebra Appl., 430(11-12):2929–2942, 2009.

https://doi.org/10.1016/j.laa.2009.01.003.

[25] Xu, Q., Y. Chen, and C. Song: Representations for weighted Moore-Penrose in- verses of partitioned adjointable operators. Linear Algebra Appl., 438(1):10–30, 2013.

https://doi.org/10.1016/j.laa.2012.08.002.

[26] Xu, Q. and X. Hu: Particular formulae for the Moore-Penrose inverses of the par- titioned bounded linear operators. Linear Algebra Appl., 428(11-12):2941–2946, 2008.

https://doi.org/10.1016/j.laa.2008.01.021.

[27] Yan, Z.Z.: New representations of the Moore-Penrose inverse of 2×2 block matrices. Linear Algebra Appl., 456:3–15, 2014. https://doi.org/10.1016/j.laa.2012.08.014.


Universität Leipzig

Fakultät für Mathematik und Informatik PF 10 09 20

D-04009 Leipzig Germany

fritzsche@math.uni-leipzig.de maedler@math.uni-leipzig.de
