ORIGINAL PAPER

Least upper bound of truncation error of low-rank matrix approximation algorithm using QR decomposition with pivoting

Haruka Kawamura1 · Reiji Suda1

Received: 31 May 2020 / Revised: 24 January 2021 / Accepted: 1 February 2021 / Published online: 24 February 2021

© The Author(s) 2021

Abstract

Low-rank approximation by QR decomposition with pivoting (pivoted QR) is known to be less accurate than singular value decomposition (SVD); however, the amount of computation is smaller than that of SVD. The least upper bound of the ratio of the truncation error, defined by $\|A - BC\|_2$, using pivoted QR to that using SVD is proved to be $\sqrt{\frac{4^{k}-1}{3}(n-k)+1}$ for $A \in \mathbb{R}^{m\times n}$ ($m \ge n$), approximated as a product of $B \in \mathbb{R}^{m\times k}$ and $C \in \mathbb{R}^{k\times n}$, in this study.

Keywords  Error analysis · Pivoting · QR decomposition · Singular values

Mathematics Subject Classification  65F55 · 15A45

1 Introduction

1.1 Low‑rank approximation

Low-rank matrix approximation involves approximating a matrix by a matrix whose rank is less than that of the original matrix. Let $A \in \mathbb{R}^{m\times n}$; then, a rank-$k$ approximation of $A$ is given by

$$A \approx BC,$$

where $B \in \mathbb{R}^{m\times k}$ and $C \in \mathbb{R}^{k\times n}$. Low-rank matrix approximation appears in many applications such as data mining [5] and machine learning [14]. It also plays an important role in tensor decompositions [12].

* Reiji Suda
reiji@is.s.u-tokyo.ac.jp

Haruka Kawamura
kawamulahaluka@gmail.com

1 The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan


This paper discusses truncation errors of low-rank matrix approximation using QR decomposition with pivoting, or pivoted QR. In this study, rounding errors are not considered, and the norm used is basically the 2-norm. $A \in \mathbb{R}^{m\times n}$ (without loss of generality, we assume that $m \ge n$) is approximated by a product of $B \in \mathbb{R}^{m\times k}$ and $C \in \mathbb{R}^{k\times n}$, and the truncation error is defined by $\|A - BC\|_2$.
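As an illustration (added here; not part of the original paper), the following short NumPy sketch forms one such factorization $A \approx BC$ from a truncated SVD and evaluates the truncation error $\|A - BC\|_2$; the matrix sizes are arbitrary example values.

```python
# Illustrative sketch (added; not from the paper): a rank-k factorization
# A ~= B C obtained from a truncated SVD, with B of size m x k and C of
# size k x n, and its truncation error ||A - BC||_2 = sigma_{k+1}(A).
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 15, 3          # arbitrary example sizes with m >= n > k
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
B = U[:, :k] * s[:k]         # m x k
C = Vt[:k, :]                # k x n

print(np.linalg.norm(A - B @ C, 2))   # equals s[k], i.e. sigma_{k+1}(A)
print(s[k])
```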

It is well known that for any matrix $A \in \mathbb{R}^{m\times n}$ ($m \ge n$), there are orthogonal matrices $U \in \mathbb{R}^{m\times m}$ and $V \in \mathbb{R}^{n\times n}$ and a diagonal matrix $\Sigma \in \mathbb{R}^{n\times n}$ with nonnegative diagonal elements that satisfy

$$A = U\begin{pmatrix}\Sigma\\ O\end{pmatrix}V^T.$$

This is a singular value decomposition (SVD) of $A$. We define $\sigma_i(A)$ for $i = 1, 2, \ldots, n$, satisfying

$$\operatorname{diag}(\sigma_1(A), \sigma_2(A), \ldots, \sigma_n(A)) = \Sigma,$$

and assume that $\sigma_1(A) \ge \sigma_2(A) \ge \cdots \ge \sigma_n(A) \ge 0$ without loss of generality. The $\sigma_i(A)$ are the singular values of $A$. $A$ has rank $k$ if and only if $\sigma_k(A) > 0 = \sigma_{k+1}(A)$. Let

$$\Sigma_k = \operatorname{diag}(\sigma_1(A), \sigma_2(A), \ldots, \sigma_k(A)).$$

Then,

$$\min_{\operatorname{rank}(X)\le k}\|A - X\|_2 = \left\|A - U\begin{pmatrix}\Sigma_k & O\\ O & O\end{pmatrix}V^T\right\|_2 = \sigma_{k+1}(A)$$

holds [8]. Therefore, this is a rank-$k$ approximation of $A$ whose 2-norm truncation error is the smallest. We define the truncation error of low-rank approximation by SVD as

$$\mathrm{SVD}_k(A) = \sigma_{k+1}(A).$$

The amount of computation required to calculate the SVD is $O(nm\min(n, m))$.

Pivoted QR was proposed by Golub in 1965 [7]. Because the amount of computation required to calculate the low-rank approximation by pivoted QR is $O(nmk)$, it is cheaper than SVD and hence useful in many applications such as solving rank-deficient least squares problems [2]. It consists of QR decomposition and pivoting.

For any matrix $A$, there exist $Q \in \mathbb{R}^{m\times n}$ and an upper triangular matrix $R \in \mathbb{R}^{n\times n}$ that satisfy $A = QR$ and $Q^TQ = I_n$. This is a QR decomposition of $A$. We use pivoting to determine the permutation matrix $\Pi_{\mathrm{grd}}$ and apply the QR decomposition algorithm to $A\Pi_{\mathrm{grd}}$. The subscript grd signifies the greedy method, which is explained below.

Hereafter, we redefine $QR$ as a QR decomposition of $A\Pi_{\mathrm{grd}} = QR$. Let $Q$ and $R$ be partitioned as

$$Q = (Q_{1k}\ Q_{2k}), \quad R = \begin{pmatrix}R_{1k} & R_{2k}\\ O & R_{3k}\end{pmatrix},$$

where $Q_{1k} \in \mathbb{R}^{m\times k}$ and $R_{1k} \in \mathbb{R}^{k\times k}$. Then, we can approximate $A$ to $Q_{1k}(R_{1k}\ R_{2k})\Pi_{\mathrm{grd}}^T$ and

$$\left\|A - Q_{1k}(R_{1k}\ R_{2k})\Pi_{\mathrm{grd}}^T\right\|_2 = \|R_{3k}\|_2$$

holds. We define the truncation error of low-rank approximation by pivoted QR as

$$\mathrm{pivotQR}_k(A) = \|R_{3k}\|_2.$$

In this study, the greedy method is used to make $\|R_{3k}\|_2$ small in pivoting. Pivoting is performed such that the elements in $R = (r_{ij})$ satisfy the following inequalities [1, p. 103]:

$$r_{ll}^2 \ge \sum_{i=l}^{j}r_{ij}^2 \quad (l = 1, 2, \ldots, n-1,\ j = l+1, l+2, \ldots, n). \tag{1}$$

Condition (1) is not used to analyze the error for $l = k+1, k+2, \ldots, n-1$.

The greedy method of pivoting is not always optimal. QR decompositions of $A\Pi_{RR}$, where $\Pi_{RR}$ is chosen such that $R_{RR}$ has a small lower right block and where $Q_{RR}R_{RR}$ is a QR decomposition of $A\Pi_{RR}$, are called rank-revealing QR (RRQR). The following theorem was shown by Hong et al. in 1992 [9].

Theorem 1  Let $m \ge n > k$ and $A \in \mathbb{R}^{m\times n}$. Then, there exists a permutation matrix $\Pi \in \mathbb{R}^{n\times n}$ such that the diagonal blocks of $R = \begin{pmatrix}R_1 & R_2\\ O & R_3\end{pmatrix}$, the upper triangular factor of the QR decomposition of $A\Pi$ with $R_1 \in \mathbb{R}^{k\times k}$, satisfy the following inequality:

$$\|R_3\|_2 \le \sqrt{k(n-k) + \min(k, n-k)}\ \sigma_{k+1}(A).$$

Finding the optimal permutation matrix is not practical from the viewpoint of computational complexity.

1.2 Truncation error of pivoted QR

Pivoted QR sometimes results in a large truncation error. A well-known example was shown by Kahan, whose work we do not reproduce here [10]. In 1968, Faddeev et al. [6] showed that

$$\mathrm{pivotQR}_{n-1}(A) \le \sqrt{\frac{4^{n}+6n-1}{3}}\ \mathrm{SVD}_{n-1}(A).$$

Furthermore,

$$\mathrm{pivotQR}_{k}(A) \le \sqrt{n\,\frac{4^{k}+6k-1}{3}}\ \mathrm{SVD}_{k}(A)$$

holds [3].
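For orientation, the two truncation errors and their ratio can be computed directly; this numerical sketch is not part of the original paper and assumes that SciPy's column-pivoted QR (LAPACK geqp3) realizes the greedy pivoting of condition (1).

```python
# Sketch (added; not from the paper): compare pivotQR_k(A) = ||R_3k||_2 with
# SVD_k(A) = sigma_{k+1}(A) on a random matrix, assuming scipy.linalg.qr with
# pivoting=True performs the greedy column pivoting described in Sect. 1.1.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
m, n, k = 50, 20, 5
A = rng.standard_normal((m, n))

Q, R, piv = qr(A, mode="economic", pivoting=True)
pivot_qr_err = np.linalg.norm(R[k:, k:], 2)       # pivotQR_k(A)
svd_err = np.linalg.svd(A, compute_uv=False)[k]   # SVD_k(A)

bound = np.sqrt((4**k - 1) / 3 * (n - k) + 1)     # least upper bound (Sect. 3)
print(pivot_qr_err / svd_err, "<=", bound)
```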

However, in a survey in 2017, it was stated that "very little is known in theory about its behaviour" [13, p. 2218] with regard to pivoted QR; thus, there is still room for further research on pivoted QR.

Our previous work showed that the least upper bound of the ratio of the truncation error of pivoted QR to that of SVD is $\sqrt{\frac{4^{n-1}+2}{3}}$ in case an $m\times n$ ($m \ge n$) matrix is approximated by a matrix whose rank is $n-1$, i.e., for $k = n-1$ [11]. The tight upper bound for all $k$ is proved in the rest of this paper.

We assume that all matrices and vectors in this paper are real; however, we can easily extend the discussion in this paper to complex numbers, and the same results can be obtained.

2 Preliminaries

In this section, we define the notations and examine the basic properties to analyze the truncation errors. First, we introduce the concept resi.

Proposition 1 [1, p. 16]  For $A \in \mathbb{R}^{m\times n}$, there exists $X \in \mathbb{R}^{n\times m}$ that satisfies

$$AXA = A, \quad XAX = X, \quad (AX)^T = AX, \quad (XA)^T = XA,$$

and $X$ is uniquely determined by the four conditions.

Definition 1  For $A \in \mathbb{R}^{m\times n}$ ($m \ge n$), the generalized inverse of $A$ is defined as the $X \in \mathbb{R}^{n\times m}$ that satisfies the four conditions in Proposition 1 and is denoted by $A^{\dagger}$.

The following notation is closely related to the truncation error of pivoted QR.

Definition 2  Let $A \in \mathbb{R}^{m\times n}$ ($m \ge n$) and $B \in \mathbb{R}^{m\times l}$. We define $\operatorname{resi}(A, B)$ as

$$\operatorname{resi}(A, B) = B - AA^{\dagger}B.$$

We denote the inner product of two vectors $x$ and $y$ as $(x, y)$.

Example 1  For $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$, if $x \ne 0$, then the following holds:

$$\operatorname{resi}(x, y) = y - \frac{(x, y)}{\|x\|^2}x.$$

The following lemma will be used to identify resi.

Lemma 1  Let $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{m\times l}$, and $X \in \mathbb{R}^{n\times l}$. Then,

$$A^TAX - A^TB = O \iff \operatorname{resi}(A, B) = B - AX$$


holds.

Proof  If $\operatorname{resi}(A, B) = B - AX$ holds, then

$$A^TAX - A^TB = -A^T\operatorname{resi}(A, B) = (A^TAA^{\dagger} - A^T)B = (A^T(AA^{\dagger})^T - A^T)B = ((AA^{\dagger}A)^T - A^T)B = O$$

holds. If $A^TAX - A^TB = O$ holds, then

$$\operatorname{resi}(A, B) - B + AX = AX - AA^{\dagger}B = AA^{\dagger}AX - AA^{\dagger}B = (AA^{\dagger})^TAX - (AA^{\dagger})^TB = A^{\dagger T}(A^TAX - A^TB) = O$$

holds. ◻

Lemma 2 [1, p. 5]  Let $A \in \mathbb{R}^{m\times n}$ ($m \ge n$), $b \in \mathbb{R}^m$, and $x \in \mathbb{R}^n$. $\|b - Ax\| \le \|b - Ay\|$ holds for any $y \in \mathbb{R}^n$ if and only if $A^T(Ax - b) = 0$ holds.

Using Lemmas 1 and 2, we can obtain the following lemma.

Lemma 3  Let $A \in \mathbb{R}^{m\times n}$ ($m \ge n$), $b \in \mathbb{R}^m$, and $x \in \mathbb{R}^n$. $\|b - Ax\| \le \|b - Ay\|$ holds for any $y \in \mathbb{R}^n$ if and only if $\operatorname{resi}(A, b) = b - Ax$ holds.

Lemma 4  Let $m \ge n > k$, $A \in \mathbb{R}^{m\times n}$, and $B \in \mathbb{R}^{m\times l}$. Let $A$ be partitioned as

$$A = (A_{1k}\ A_{2k}),$$

where $A_{1k} \in \mathbb{R}^{m\times k}$. Then,

$$\operatorname{resi}(A, B) = \operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr)$$

holds.

Proof  From the definition of resi, we can see that

$$\operatorname{resi}(A_{1k}, A_{2k}) = A_{2k} - A_{1k}X, \tag{2}$$

$$\operatorname{resi}(A_{1k}, B) = B - A_{1k}Y, \tag{3}$$

and

$$\operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr) = \operatorname{resi}(A_{1k}, B) - \operatorname{resi}(A_{1k}, A_{2k})Z \tag{4}$$

hold, where $X = A_{1k}^{\dagger}A_{2k}$, $Y = A_{1k}^{\dagger}B$, and $Z = \operatorname{resi}(A_{1k}, A_{2k})^{\dagger}\operatorname{resi}(A_{1k}, B)$. Thus,

$$\operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr) = B - A_{1k}Y - A_{2k}Z + A_{1k}XZ \tag{5}$$

holds from (2), (3), and (4). Lemma 1 proves

$$A_{1k}^TA_{2k} = A_{1k}^TA_{1k}X \tag{6}$$

from (2),

$$A_{1k}^TB = A_{1k}^TA_{1k}Y \tag{7}$$

from (3), and

$$\operatorname{resi}(A_{1k}, A_{2k})^T\bigl(\operatorname{resi}(A_{1k}, B) - \operatorname{resi}(A_{1k}, A_{2k})Z\bigr) = O \tag{8}$$

from (4). We can see that

$$A_{1k}^T\operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr) = A_{1k}^T(B - A_{1k}Y - A_{2k}Z + A_{1k}XZ) = O \tag{9}$$

from (5), (6), and (7). We can see that

$$A_{2k}^T\operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr) = (A_{2k} - A_{1k}X)^T\operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr) = \operatorname{resi}(A_{1k}, A_{2k})^T\bigl(\operatorname{resi}(A_{1k}, B) - \operatorname{resi}(A_{1k}, A_{2k})Z\bigr) = O \tag{10}$$

from (2), (4), (8), and (9). Then, (9) and (10) can be combined as

$$\begin{pmatrix}A_{1k}^T\\ A_{2k}^T\end{pmatrix}\operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr) = A^T\operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr) = O. \tag{11}$$

Next, (5) can be rewritten as

$$\operatorname{resi}\bigl(\operatorname{resi}(A_{1k}, A_{2k}),\ \operatorname{resi}(A_{1k}, B)\bigr) = B - A\begin{pmatrix}Y - XZ\\ Z\end{pmatrix}.$$

From this and (11), we have

$$A^T\left(B - A\begin{pmatrix}Y - XZ\\ Z\end{pmatrix}\right) = O.$$

Application of Lemma 1 to this proves the lemma. ◻

QR decomposition and resi have the following relation. Note that QR in this lemma is without pivoting.

Lemma 5  Let $m \ge n > l$, $A \in \mathbb{R}^{m\times n}$, and $A = QR$ be a QR decomposition partitioned as

$$A = (A_{1l}\ A_{2l}), \quad Q = (Q_{1l}\ Q_{2l}), \quad R = \begin{pmatrix}R_{1l} & R_{2l}\\ O & R_{3l}\end{pmatrix},$$

where $A_{1l} \in \mathbb{R}^{m\times l}$, $Q_{1l} \in \mathbb{R}^{m\times l}$, and $R_{1l} \in \mathbb{R}^{l\times l}$. If $\operatorname{rank}(A_{1l}) = l$ holds, then

$$\operatorname{resi}(A_{1l}, A_{2l}) = Q_{2l}R_{3l}$$

holds.

Proof  We have

$$A_{1l} = Q_{1l}R_{1l}, \quad A_{2l} = Q_{1l}R_{2l} + Q_{2l}R_{3l}.$$

Let

$$X = R_{1l}^{-1}R_{2l}.$$

Then, we have

$$Q_{2l}R_{3l} = A_{2l} - A_{1l}X.$$

Furthermore,

$$A_{1l}^T(A_{2l} - A_{1l}X) = A_{1l}^TQ_{2l}R_{3l} = R_{1l}^TQ_{1l}^TQ_{2l}R_{3l} = R_{1l}^TOR_{3l} = O$$

holds. Application of Lemma 1 to this proves the lemma. ◻

Then, we return to pivoted QR. Let

$$A\Pi_{\mathrm{grd}} = (a_{\pi_1}\ a_{\pi_2}\ \cdots\ a_{\pi_n}) = (A_{1k}\ A_{2k}),$$

where $A_{1k} \in \mathbb{R}^{m\times k}$. From Lemma 5, we can see that

$$(1) \iff \left\|(r_{ll}\ r_{l+1,l}\ \cdots\ r_{nl})^T\right\|_2 \ge \left\|(r_{lj}\ r_{l+1,j}\ \cdots\ r_{nj})^T\right\|_2 \iff \left\|Q_{2(l-1)}(r_{ll}\ r_{l+1,l}\ \cdots\ r_{nl})^T\right\|_2 \ge \left\|Q_{2(l-1)}(r_{lj}\ r_{l+1,j}\ \cdots\ r_{nj})^T\right\|_2 \iff \left\|\operatorname{resi}\bigl((a_{\pi_1}\cdots a_{\pi_{l-1}}),\ a_{\pi_l}\bigr)\right\|_2 \ge \left\|\operatorname{resi}\bigl((a_{\pi_1}\cdots a_{\pi_{l-1}}),\ a_{\pi_j}\bigr)\right\|_2$$

for $l = 1, 2, \ldots, k$ and $j = l+1, l+2, \ldots, n$, and

$$\mathrm{pivotQR}_k(A) = \|R_{3k}\|_2 = \|Q_{2k}R_{3k}\|_2 = \|\operatorname{resi}(A_{1k}, A_{2k})\|_2$$

if $\operatorname{rank}(A_{1k}) = k$ holds. The last equation suggests that, as long as $\operatorname{rank}(A_{1k}) = k$ holds, the value of $\mathrm{pivotQR}_k(A)$ is determined only from $A_{1k}$ and $A_{2k}$, or equivalently from $\Pi_{\mathrm{grd}}$, and is independent of how (or in what algorithm) the QR decomposition is computed.
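The identity $\mathrm{pivotQR}_k(A) = \|\operatorname{resi}(A_{1k}, A_{2k})\|_2$ can also be checked numerically; the following sketch is added here (it is not from the paper) and again assumes that SciPy's pivoted QR implements the greedy pivoting of condition (1).

```python
# Sketch (added; not from the paper): verify numerically that
# pivotQR_k(A) = ||R_3k||_2 = ||resi(A_1k, A_2k)||_2, where
# resi(A, B) = B - A A^+ B (Definition 2) and (A_1k  A_2k) = A Pi_grd.
import numpy as np
from scipy.linalg import qr

def resi(A, B):
    return B - A @ (np.linalg.pinv(A) @ B)

rng = np.random.default_rng(1)
m, n, k = 30, 12, 4
A = rng.standard_normal((m, n))

Q, R, piv = qr(A, mode="economic", pivoting=True)
AP = A[:, piv]                                  # A Pi_grd
A1k, A2k = AP[:, :k], AP[:, k:]

print(np.linalg.norm(R[k:, k:], 2))             # ||R_3k||_2
print(np.linalg.norm(resi(A1k, A2k), 2))        # ||resi(A_1k, A_2k)||_2
```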

3 Evaluation from above

We bound $\mathrm{pivotQR}_k(A)/\mathrm{SVD}_k(A)$ from above in this section. Since $\mathrm{pivotQR}_k(A) = \mathrm{SVD}_k(A) = 0$ holds if $\operatorname{rank}(A) \le k$ holds, we only consider the case $\operatorname{rank}(A) > k$. Let $A = U\Sigma V^T$ be one SVD. Since $A\Pi_{\mathrm{grd}} = U\Sigma(\Pi_{\mathrm{grd}}^TV)^T$ and $(\Pi_{\mathrm{grd}}^TV)^T(\Pi_{\mathrm{grd}}^TV) = I_n$ hold, $U\Sigma(\Pi_{\mathrm{grd}}^TV)^T$ is one SVD of $A\Pi_{\mathrm{grd}}$. Then, we can see that

$$\mathrm{SVD}_k(A) = \sigma_{k+1}(A) = \sigma_{k+1}(A\Pi_{\mathrm{grd}}).$$

Hereafter, we change what $A$ represents. The previous $A\Pi_{\mathrm{grd}}$ is replaced by $A$. Let $A \in \mathbb{R}^{m\times n}$ satisfying

$$\left\|\operatorname{resi}\bigl((a_1\cdots a_{i-1}),\ a_i\bigr)\right\| \ge \left\|\operatorname{resi}\bigl((a_1\cdots a_{i-1}),\ a_j\bigr)\right\| \quad (i = 1, \ldots, k,\ j = i+1, \ldots, n) \tag{12}$$

be partitioned as

$$A = (a_1\ a_2\ \cdots\ a_n) = (A_{1k}\ A_{2k}),$$

where $A_{1k} \in \mathbb{R}^{m\times k}$ and $\operatorname{rank}(A_{1k}) = k$. We should compare $\sigma_{k+1}(A) = \mathrm{SVD}_k(A)$ and $\|\operatorname{resi}(A_{1k}, A_{2k})\|_2 = \mathrm{pivotQR}_k(A)$.

Lemma 6  Let $m \ge n$, $A \in \mathbb{R}^{m\times n}$, and $B \in \mathbb{R}^{m\times l}$. For any $v \in \mathbb{R}^l$,

$$\operatorname{resi}(A, B)v = \operatorname{resi}(A, Bv)$$

holds.

Proof  From the definition of resi,

$$\operatorname{resi}(A, B) = B - AA^{\dagger}B$$

holds. Thus,

$$\operatorname{resi}(A, B)v = Bv - AA^{\dagger}Bv = \operatorname{resi}(A, Bv)$$

holds. ◻

We can see that

$$\|\operatorname{resi}(A_{1k}, A_{2k})\|_2 = \max_{z\in\mathbb{R}^{n-k},\ \|z\|=1}\|\operatorname{resi}(A_{1k}, A_{2k})z\| = \max_{z\in\mathbb{R}^{n-k},\ \|z\|=1}\|\operatorname{resi}(A_{1k}, A_{2k}z)\|$$

from the definition of the 2-norm and Lemma 6. Now, we introduce an essential theorem of this paper.

Theorem 2  Let $m \ge n > 1$, $A \in \mathbb{R}^{m\times n}$, $\operatorname{rank}(A) = n$, and $A$ be partitioned as

$$A = (a_1\ a_2\ \cdots\ a_n).$$

We define $\hat{A}_i$ as

$$\hat{A}_i = (a_1\cdots a_{i-1}\ a_{i+1}\cdots a_n)$$

for $i = 1, 2, \ldots, n$, and $d_i$ as

$$d_i = \operatorname{resi}(\hat{A}_i, a_i)$$

for $i = 1, 2, \ldots, n$. Then, $d_i \ne 0$ for $i = 1, 2, \ldots, n$ and

$$\frac{\|a_1\|}{\|d_1\|} \le \sum_{i=2}^{n}\frac{\|a_i\|}{\|d_i\|}$$

hold.

Proof  Since $\operatorname{rank}(A) = n$, $\{a_1, a_2, \ldots, a_n\}$ is linearly independent. Because $d_i$ is a linear combination of $\{a_1, a_2, \ldots, a_n\}$ with the coefficient of $a_i$ being 1, $d_i \ne 0$ holds for $i = 1, 2, \ldots, n$. From the definition of resi,

$$d_1 = a_1 - \hat{A}_1x_1$$

holds, where $x_1 = \hat{A}_1^{\dagger}a_1$. Let $x_1 = (x_{12}\ x_{13}\ \cdots\ x_{1n})^T$. Let $i$ be one of $2, 3, \ldots, n$. We can see that

$$\|d_i\| \le \left\|a_i - \hat{A}_i\left(\frac{1}{x_{1i}},\ -\frac{x_{12}}{x_{1i}},\ \ldots,\ -\frac{x_{1,i-1}}{x_{1i}},\ -\frac{x_{1,i+1}}{x_{1i}},\ \ldots,\ -\frac{x_{1n}}{x_{1i}}\right)^{T}\right\| = \frac{\|d_1\|}{|x_{1i}|}$$

holds if $x_{1i} \ne 0$ from Lemma 3. Thus,

$$|x_{1i}| \le \frac{\|d_1\|}{\|d_i\|} \tag{13}$$

holds. This (13) also holds if $x_{1i} = 0$. We define $y \in \mathbb{R}^m$ as

$$y = d_1 + x_{1n}a_n = a_1 - \sum_{i=2}^{n-1}x_{1i}a_i.$$

Since $\{a_1, a_2, \ldots, a_{n-1}\}$ is linearly independent, $y \ne 0$ holds. As Lemma 1 gives $\hat{A}_1^Td_1 = 0$, we have $(a_n, d_1) = 0$. Thus,

$$x_{1n} = \frac{(a_n, y)}{\|a_n\|^2}$$

holds. We can see that

$$\|d_n\| \le \left\|a_n - \frac{(y, a_n)}{\|y\|^2}y\right\|$$

holds from Lemma 3 because $y$ is a linear combination of $a_i$ $(i = 1, 2, \ldots, n-1)$. Since

$$\|d_n\|^2 = \frac{\|d_n\|^2}{\|d_1\|^2}\left\|y - \frac{(a_n, y)}{\|a_n\|^2}a_n\right\|^2 = \frac{\|d_n\|^2}{\|d_1\|^2}\|y\|^2\left(1 - \frac{(a_n, y)^2}{\|y\|^2\|a_n\|^2}\right),$$

$$\left\|a_n - \frac{(y, a_n)}{\|y\|^2}y\right\|^2 = \|a_n\|^2\left(1 - \frac{(a_n, y)^2}{\|y\|^2\|a_n\|^2}\right),$$

and $\|d_n\| > 0$ hold,

$$\frac{\|a_n\|}{\|d_n\|} \ge \frac{\|y\|}{\|d_1\|}$$

holds. Furthermore, since

$$\|y\| \ge \|a_1\| - \sum_{i=2}^{n-1}|x_{1i}|\,\|a_i\| \ge \|a_1\| - \sum_{i=2}^{n-1}\frac{\|d_1\|}{\|d_i\|}\|a_i\|$$

holds from (13),

$$\frac{\|a_n\|}{\|d_n\|} \ge \frac{\|a_1\|}{\|d_1\|} - \sum_{i=2}^{n-1}\frac{\|a_i\|}{\|d_i\|}$$

holds, and the theorem has been proved. ◻

We refer to an essential theorem by Hong et al.

Theorem 3 [9, p. 218]  Let $m \ge n > l$, $A \in \mathbb{R}^{m\times n}$, and $A = QR = U\Sigma V^T$ be a QR decomposition and an SVD, respectively. Let $R$ and $V$ be partitioned as

$$R = \begin{pmatrix}R_{1l} & R_{2l}\\ O & R_{3l}\end{pmatrix}, \quad V = \begin{pmatrix}V_{1l} & V_{2l}\\ V_{3l} & V_{4l}\end{pmatrix},$$

where $R_{1l} \in \mathbb{R}^{l\times l}$ and $V_{1l} \in \mathbb{R}^{l\times l}$. Then,

$$\|R_{3l}\|_2\,\sigma_{n-l}(V_{4l}) \le \sigma_{l+1}(A)$$

holds.

In the present study, this theorem is only used for $l = n-1$. The following lemma provides an inequality between resi and the singular value.

Lemma 7  Under the same assumptions as Theorem 2,

$$1 \le (\sigma_n(A))^2\sum_{i=1}^{n}\frac{1}{\|d_i\|^2}$$

holds.

Proof  Let $A = U\Sigma V^T$ be an SVD partitioned as

$$V = (V_1\ v_2), \quad v_2 = (v_{21}\ v_{22}\ \cdots\ v_{2n})^T,$$

where $V_1 \in \mathbb{R}^{n\times(n-1)}$. Let $e_i$ be the $i$th column of $I_n$ for $i = 1, 2, \ldots, n$. Define a permutation matrix $\Pi_i$ as

$$\Pi_i = (e_1\cdots e_{i-1}\ e_{i+1}\cdots e_n\ e_i)$$

for $i = 1, 2, \ldots, n$. Since

$$(\hat{A}_i\ a_i) = A\Pi_i = U\Sigma(\Pi_i^TV)^T$$

and $(\Pi_i^TV)^T(\Pi_i^TV) = I_n$, $U\Sigma(\Pi_i^TV)^T$ is one SVD of $(\hat{A}_i\ a_i)$. Let $A\Pi_i = Q_iR_i$ be a QR decomposition partitioned as

$$Q_i = (Q_{i1}\ q_{i2}), \quad R_i = \begin{pmatrix}R_{i1} & r_{i2}\\ O & r_{i3}\end{pmatrix},$$

where $Q_{i1} \in \mathbb{R}^{m\times(n-1)}$ and $R_{i1} \in \mathbb{R}^{(n-1)\times(n-1)}$. Using Theorem 3,

$$\sigma_n(A) = \sigma_n(A\Pi_i) \ge |v_{2i}|\,|r_{i3}|$$

holds. We can see that

$$\|d_i\| = \|\operatorname{resi}(\hat{A}_i, a_i)\| = \|r_{i3}q_{i2}\| = |r_{i3}|$$

holds from Lemma 5. Thus,

$$\sigma_n(A) \ge |v_{2i}|\,\|d_i\|$$

holds. Then,

$$1 = \sum_{i=1}^{n}(v_{2i})^2 \le (\sigma_n(A))^2\sum_{i=1}^{n}\frac{1}{\|d_i\|^2}$$

holds. ◻

Proposition 2  Let $m \ge n > k$ and $A \in \mathbb{R}^{m\times n}$ satisfy (12) and be partitioned as

$$A = (a_1\ a_2\ \cdots\ a_n) = (A_{1k}\ A_{2k}),$$

where $A_{1k} \in \mathbb{R}^{m\times k}$. Let $A$ satisfy $\operatorname{rank}(A_{1k}) = k$. Then, for all $z \in \mathbb{R}^{n-k}$ with $\|z\| = 1$,

$$\|\operatorname{resi}(A_{1k}, A_{2k}z)\| \le \sqrt{\frac{4^{k}-1}{3}(n-k)+1}\,\sigma_{k+1}(A)$$

holds.

Proof  From (12) and Lemma 6, the following holds for $i = 1, 2, \ldots, k$:

$$(n-k)\left\|\operatorname{resi}\bigl((a_1\cdots a_{i-1}), a_i\bigr)\right\|^2 \ge \sum_{j=k+1}^{n}\left\|\operatorname{resi}\bigl((a_1\cdots a_{i-1}), a_j\bigr)\right\|^2 = \left\|\operatorname{resi}\bigl((a_1\cdots a_{i-1}), A_{2k}\bigr)\right\|_F^2 \ge \left\|\operatorname{resi}\bigl((a_1\cdots a_{i-1}), A_{2k}\bigr)\right\|_2^2 \ge \left\|\operatorname{resi}\bigl((a_1\cdots a_{i-1}), A_{2k}z\bigr)\right\|^2. \tag{14}$$

Define $\bar{A}$ as

$$\bar{A} = (a_1\ a_2\ \cdots\ a_k\ A_{2k}z).$$

If $\operatorname{rank}(\bar{A}) \ne k+1$, then $\{a_1, a_2, \ldots, a_k, A_{2k}z\}$ is linearly dependent. Since $\operatorname{rank}(A_{1k}) = k$, $\{a_1, a_2, \ldots, a_k\}$ is linearly independent, and $A_{2k}z$ can be expressed as a linear combination of $\{a_1, a_2, \ldots, a_k\}$. Then, we have $\operatorname{resi}(A_{1k}, A_{2k}z) = 0$ from Lemma 3, and the conclusion holds. Therefore, we only consider the case $\operatorname{rank}(\bar{A}) = k+1$ in the remainder of this proof. We define $d_i$ as

$$d_i = \operatorname{resi}\bigl((a_1\cdots a_{i-1}\ a_{i+1}\cdots a_k\ A_{2k}z),\ a_i\bigr) \quad (i = 1, 2, \ldots, k).$$

From Lemma 4, we can see that

$$d_j = \operatorname{resi}\bigl(\operatorname{resi}((a_1\cdots a_{i-1}), A_{ijk}),\ \operatorname{resi}((a_1\cdots a_{i-1}), a_j)\bigr)$$

holds for $i = 1, 2, \ldots, k$ and $j = i, i+1, \ldots, k$, where $A_{ijk} = (a_i\cdots a_{j-1}\ a_{j+1}\cdots a_k\ A_{2k}z)$, and

$$\operatorname{resi}(A_{1k}, A_{2k}z) = \operatorname{resi}\bigl(\operatorname{resi}((a_1\cdots a_{i-1}), (a_i\cdots a_k)),\ \operatorname{resi}((a_1\cdots a_{i-1}), A_{2k}z)\bigr)$$

holds for $i = 1, 2, \ldots, k$. Using Theorem 2 on $\operatorname{resi}\bigl((a_1\ a_2\cdots a_{i-1}),\ (a_i\ a_{i+1}\cdots a_k\ A_{2k}z)\bigr)$, we can see that

$$\frac{\left\|\operatorname{resi}((a_1\ a_2\cdots a_{i-1}), a_i)\right\|}{\left\|\operatorname{resi}\bigl(\operatorname{resi}((a_1\cdots a_{i-1}), A_{iik}),\ \operatorname{resi}((a_1\cdots a_{i-1}), a_i)\bigr)\right\|} \le \sum_{j=i+1}^{k}\frac{\left\|\operatorname{resi}((a_1\ a_2\cdots a_{i-1}), a_j)\right\|}{\left\|\operatorname{resi}\bigl(\operatorname{resi}((a_1\cdots a_{i-1}), A_{ijk}),\ \operatorname{resi}((a_1\cdots a_{i-1}), a_j)\bigr)\right\|} + \frac{\left\|\operatorname{resi}((a_1\ a_2\cdots a_{i-1}), A_{2k}z)\right\|}{\left\|\operatorname{resi}\bigl(\operatorname{resi}((a_1\cdots a_{i-1}), (a_i\cdots a_k)),\ \operatorname{resi}((a_1\cdots a_{i-1}), A_{2k}z)\bigr)\right\|}$$

holds. Thus,

$$\frac{\left\|\operatorname{resi}((a_1\ a_2\cdots a_{i-1}), a_i)\right\|}{\|d_i\|} \le \sum_{j=i+1}^{k}\frac{\left\|\operatorname{resi}((a_1\ a_2\cdots a_{i-1}), a_j)\right\|}{\|d_j\|} + \frac{\left\|\operatorname{resi}((a_1\ a_2\cdots a_{i-1}), A_{2k}z)\right\|}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|} \le \left\|\operatorname{resi}((a_1\ a_2\cdots a_{i-1}), a_i)\right\|\left(\sum_{j=i+1}^{k}\frac{1}{\|d_j\|} + \frac{\sqrt{n-k}}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|}\right)$$

holds for $i = 1, 2, \ldots, k$ from (12) and (14). Thus,

$$\frac{1}{\|d_i\|} \le \sum_{j=i+1}^{k}\frac{1}{\|d_j\|} + \frac{\sqrt{n-k}}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|} \quad (i = 1, 2, \ldots, k) \tag{15}$$

holds. We want to show that

$$\frac{1}{\|d_i\|} \le \frac{2^{k-i}\sqrt{n-k}}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|} \quad (i = 1, 2, \ldots, k) \tag{16}$$

and prove this using induction in the order of $i = k, k-1, \ldots, 1$. Applying (15) for $i = k$ gives

$$\frac{1}{\|d_k\|} \le \frac{\sqrt{n-k}}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|} = \frac{2^{k-k}\sqrt{n-k}}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|}.$$

Thus, (16) is shown in case $i = k$. Then, we prove that (16) holds for $i = l$, assuming that (16) holds for $i = l+1, l+2, \ldots, k$. We can see that

$$\frac{1}{\|d_l\|} \le \sum_{j=l+1}^{k}\frac{1}{\|d_j\|} + \frac{\sqrt{n-k}}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|} \le \frac{\sqrt{n-k}}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|}\left(\sum_{j=l+1}^{k}2^{k-j} + 1\right) = \frac{2^{k-l}\sqrt{n-k}}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|}$$

holds from (15) and the assumption of induction. Thus, (16) has been shown in case $i = 1, 2, \ldots, k$. Using Lemma 7 on $\bar{A}$ together with (16),

$$1 \le (\sigma_{k+1}(\bar{A}))^2\left(\sum_{i=1}^{k}\frac{1}{\|d_i\|^2} + \frac{1}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|^2}\right) \le \frac{(\sigma_{k+1}(\bar{A}))^2}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|^2}\left((n-k)\sum_{i=1}^{k}4^{k-i} + 1\right) = \frac{(\sigma_{k+1}(\bar{A}))^2}{\|\operatorname{resi}(A_{1k}, A_{2k}z)\|^2}\left(\frac{4^{k}-1}{3}(n-k) + 1\right)$$

holds. Thus,

$$\|\operatorname{resi}(A_{1k}, A_{2k}z)\| \le \sqrt{\frac{4^{k}-1}{3}(n-k)+1}\,\sigma_{k+1}(\bar{A})$$

holds. Now, if we can show that

$$\sigma_{k+1}(\bar{A}) \le \sigma_{k+1}(A),$$

then the proof is complete. Considering the fact that

$$\sigma_{k+1}(A) = \max_{\Theta,\ \dim\Theta = k+1}\ \min_{x\in\Theta,\ \|x\|=1}\|Ax\|,$$

we want a subspace $\Theta$ that satisfies

$$\min_{x\in\Theta,\ \|x\|=1}\|Ax\| \ge \sigma_{k+1}(\bar{A}).$$

Let

$$\Theta = \operatorname{span}\left\{e_1, e_2, \ldots, e_k, \begin{pmatrix}0\\ z\end{pmatrix}\right\}.$$

Then, we have $\dim(\Theta) = k+1$ since $\left\{e_1, e_2, \ldots, e_k, \begin{pmatrix}0\\ z\end{pmatrix}\right\}$ is linearly independent.

Let $y = (y_i) \in \mathbb{R}^{k+1}$. Since $\left(e_1\ e_2\ \cdots\ e_k\ \begin{pmatrix}0\\ z\end{pmatrix}\right)^T\left(e_1\ e_2\ \cdots\ e_k\ \begin{pmatrix}0\\ z\end{pmatrix}\right) = I_{k+1}$ holds,

$$\|y\| = 1 \iff \left\|\sum_{i=1}^{k}y_ie_i + y_{k+1}\begin{pmatrix}0\\ z\end{pmatrix}\right\| = 1 \tag{17}$$

holds. For all $y \in \mathbb{R}^{k+1}$ that satisfy the right-hand side of (17),

$$\left\|A\left(\sum_{i=1}^{k}y_ie_i + y_{k+1}\begin{pmatrix}0\\ z\end{pmatrix}\right)\right\| = \|\bar{A}y\| \ge \sigma_{k+1}(\bar{A})$$

holds. Then,

$$\sigma_{k+1}(A) \ge \min_{x\in\Theta,\ \|x\|=1}\|Ax\| \ge \sigma_{k+1}(\bar{A})$$

holds. ◻

Thus, we have proved that

$$\mathrm{pivotQR}_k(A) \le \sqrt{\frac{4^{k}-1}{3}(n-k)+1}\,\mathrm{SVD}_k(A).$$
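As a concrete illustration of the bound (this worked instance is added here and is not part of the original text), take $k = 2$ and $n = 5$:

$$\sqrt{\frac{4^{2}-1}{3}(5-2)+1} = \sqrt{5\cdot 3+1} = \sqrt{16} = 4,$$

so for any $m\times 5$ matrix ($m \ge 5$), the rank-2 truncation error of greedy pivoted QR is at most four times that of the SVD, and Sect. 4 shows that this factor can be approached arbitrarily closely.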

4 Evaluation from below

In this section, we show that the inequality proved in the previous section is tight. An example of a matrix $R_h$ with real-valued parameter $h$ that satisfies

$$\frac{\mathrm{pivotQR}_k(R_h)}{\mathrm{SVD}_k(R_h)} \xrightarrow{\ h\to 0\ } \sqrt{\frac{4^{k}-1}{3}(n-k)+1}$$

is shown. $R_h$ is as follows:

$$R_h = \begin{pmatrix}\begin{matrix}1 & 0 & \cdots & 0\\ 0 & h & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & h^{k}\end{matrix} & O\\ O & O\end{pmatrix}\begin{pmatrix}1 & -\sqrt{1-h^{2}} & \cdots & -\sqrt{1-h^{2}} & \cdots & -\sqrt{1-h^{2}}\\ 0 & 1 & \ddots & \vdots & & \vdots\\ \vdots & \ddots & \ddots & -\sqrt{1-h^{2}} & \cdots & -\sqrt{1-h^{2}}\\ 0 & \cdots & 0 & 1 & \cdots & 1\\ & & & & O & \end{pmatrix}.$$

The Kahan matrix is [10]

$$K_n = \begin{pmatrix}1 & 0 & \cdots & 0\\ 0 & h & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & h^{n-1}\end{pmatrix}\begin{pmatrix}1 & -\sqrt{1-h^{2}} & \cdots & -\sqrt{1-h^{2}}\\ 0 & 1 & \ddots & \vdots\\ \vdots & \ddots & \ddots & -\sqrt{1-h^{2}}\\ 0 & \cdots & 0 & 1\end{pmatrix}.$$

Therefore, $R_h$ is the same as the Kahan matrix in case $m = n = k+1$ and is an extension of the Kahan matrix otherwise.

Proposition 3  Let $m \ge n > k$. Define $\Sigma_h \in \mathbb{R}^{m\times n}$, $(w_{hij}) = W_h \in \mathbb{R}^{n\times n}$, and $R_h \in \mathbb{R}^{m\times n}$ as follows:

$$\Sigma_h = \begin{pmatrix}\operatorname{diag}(1, h, \ldots, h^{k}, 0, 0, \ldots, 0)\\ O\end{pmatrix},$$

$$w_{hij} = \begin{cases}1 & (i = j \text{ and } 1 \le i \le k) \text{ or } (i = k+1 \text{ and } k+1 \le j \le n),\\ -\sqrt{1-h^{2}} & (i < j \text{ and } 1 \le i \le k),\\ 0 & \text{otherwise},\end{cases}$$

and $R_h = \Sigma_hW_h$, where $0 < h < 1$. Then,

$$\lim_{h\to 0}\frac{\mathrm{pivotQR}_k(R_h)}{\mathrm{SVD}_k(R_h)} = \sqrt{\frac{4^{k}-1}{3}(n-k)+1}$$

holds.

Proof  Let $Q = \begin{pmatrix}I_n\\ O\end{pmatrix} \in \mathbb{R}^{m\times n}$ and $R = \operatorname{diag}(1, h, \ldots, h^{k}, 0, 0, \ldots, 0)W_h \in \mathbb{R}^{n\times n}$. Since $R$ is an upper triangular matrix and $Q^TQ = I_n$ holds, $R_h = QR$ is one QR decomposition. We check (1) for this $R$. Since

$$(\text{left side of }(1)) = h^{2l-2} = (1-h^{2})\sum_{i=l}^{\min(j,k+1)-1}h^{2i-2} + h^{2\min(j,k+1)-2} = (\text{right side of }(1)),$$

(1) holds for $l = 1, 2, \ldots, k+1$, $j = l+1, l+2, \ldots, n$. Obviously, (1) also holds for $l = k+2, k+3, \ldots, n-1$, $j = l+1, l+2, \ldots, n$. As in Sect. 2, let $R$ be partitioned as

$$R = \begin{pmatrix}R_{1k} & R_{2k}\\ O & R_{3k}\end{pmatrix},$$

where $R_{1k} \in \mathbb{R}^{k\times k}$. Then,

$$\mathrm{pivotQR}_k(R_h) = \|R_{3k}\|_2$$

holds. Define $V \in \mathbb{R}^{(n-k)\times(n-k)}$ and $v_1 \in \mathbb{R}^{n-k}$ as follows:

$$V = (v_1\ v_2\ \cdots\ v_{n-k}), \quad v_1 = \frac{1}{\sqrt{n-k}}(1\ 1\ \cdots\ 1)^T,$$

where $v_2$, $v_3$, ..., $v_{n-k}$ are chosen such that $V^TV = I_{n-k}$ holds. We can choose them freely as long as this is satisfied. Since

$$R_{3k} = h^{k}\sqrt{n-k}\,(v_1\ 0\ \cdots\ 0)^T$$

holds, $\|R_{3k}\|_2 = h^{k}\sqrt{n-k}$ holds. We consider the value of $\mathrm{SVD}_k(R_h) = \sigma_{k+1}(R_h)$. Considering the fact that

$$\sigma_{k+1}(R_h) = \min_{\Theta,\ \dim\Theta = n-k}\ \max_{x\in\Theta,\ \|x\|=1}\|R_hx\|,$$

we want a subspace $\Theta$ whose $\max_{x\in\Theta,\ \|x\|=1}\|R_hx\|$ is small. Since $v_1^Tv_i = 0$ holds for $i = 2, 3, \ldots, n-k$,

$$R_h\begin{pmatrix}0\\ v_i\end{pmatrix} = \begin{pmatrix}R_{2k}\\ R_{3k}\\ O\end{pmatrix}v_i = \begin{pmatrix}-\sqrt{(n-k)(1-h^{2})}\,h^{0}\\ -\sqrt{(n-k)(1-h^{2})}\,h^{1}\\ \vdots\\ -\sqrt{(n-k)(1-h^{2})}\,h^{k-1}\\ \sqrt{n-k}\,h^{k}\\ 0\\ \vdots\\ 0\end{pmatrix}v_1^Tv_i = 0$$

holds for $i = 2, 3, \ldots, n-k$. We define $y_j = 1$ for $j = k+1, \ldots, n$ and define $y_j$ from $j = k$ down to $j = 1$ as $y_j = \sqrt{1-h^{2}}\sum_{i=j+1}^{n}y_i$. We define $y \in \mathbb{R}^n$ as $(y_1\ y_2\ \cdots\ y_n)^T$. Then,

$$R_hy = (0\ 0\ \cdots\ 0\ (n-k)h^{k}\ 0\ 0\ \cdots\ 0)^T$$

holds. Since

$$\lim_{h\to 0}\|y\| = \left\|\bigl((n-k)2^{k-1}\ (n-k)2^{k-2}\ \cdots\ (n-k)2^{0}\ 1\ 1\ \cdots\ 1\bigr)^T\right\| = \sqrt{\frac{4^{k}-1}{3}(n-k)^{2} + n-k}$$

holds,
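A numerical check of Proposition 3 (added here; the paper contains no code) can be done by building $R_h = \Sigma_hW_h$ explicitly. As in the proof, the identity permutation already satisfies condition (1), so $\mathrm{pivotQR}_k(R_h)$ is read off from the $R$ factor constructed there.

```python
# Sketch (added; not from the paper): construct R_h = Sigma_h W_h of
# Proposition 3 and watch pivotQR_k(R_h)/SVD_k(R_h) approach
# sqrt((4^k-1)/3*(n-k)+1) as h -> 0.  pivotQR_k(R_h) = ||R_3k||_2 is taken
# from R = diag(1, h, ..., h^k, 0, ..., 0) W_h, the factor used in the proof.
import numpy as np

def build(m, n, k, h):
    W = np.zeros((n, n))
    for i in range(k):                      # rows 1..k of W_h (0-based i)
        W[i, i] = 1.0
        W[i, i + 1:] = -np.sqrt(1.0 - h**2)
    W[k, k:] = 1.0                          # row k+1 of W_h
    D = np.zeros((n, n))
    D[:k + 1, :k + 1] = np.diag(h ** np.arange(k + 1))
    R = D @ W                               # upper triangular factor
    Rh = np.vstack([R, np.zeros((m - n, n))])   # R_h = Q R with Q = (I_n; O)
    return Rh, R

m, n, k = 10, 8, 3
bound = np.sqrt((4**k - 1) / 3 * (n - k) + 1)
for h in (0.5, 0.1, 0.01, 0.001):
    Rh, R = build(m, n, k, h)
    ratio = np.linalg.norm(R[k:, k:], 2) / np.linalg.svd(Rh, compute_uv=False)[k]
    print(f"h={h:g}  ratio={ratio:.6f}  bound={bound:.6f}")
```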
