
Munich Personal RePEc Archive

A note on matrix differentiation

Kowal, Pawel

December 2006

Online at https://mpra.ub.uni-muenchen.de/3917/

MPRA Paper No. 3917, posted 09 Jul 2007 UTC


A note on matrix differentiation

Paweł Kowal

July 9, 2007

Abstract

This paper presents a set of rules for matrix differentiation with respect to a vector of parameters, using the flattened representation of derivatives, i.e. derivatives in the form of a matrix. We also introduce a new set of Kronecker tensor products of matrices. Finally, we consider the problem of differentiating the matrix determinant, trace and inverse.

JEL classification: C00

Keywords: matrix differentiation, generalized Kronecker products

1 Introduction

Derivatives of matrices with respect to a vector of parameters can be expressed as a concatenation of derivatives with respect to the scalar parameters.

However, such a representation of derivatives is very inconvenient in some applications, e.g. if higher order derivatives are considered, and is not applicable at all if matrix functions (like the determinant or the inverse) are present.

For example, finding an explicit expression for $\partial \det(X)/\partial\theta$ would be a quite complicated task. Such a problem arises naturally in many applications, e.g. in the maximum likelihood approach to estimating model parameters.

The same problem emerges in the case of a tensor representation of derivatives. Additionally, in that case extra effort is required to find the flattened representation of the resulting tensors, which is needed because efficient numerical computations are possible only with two dimensional data structures.

In this paper we derive formulas for differentiating matrices with respect to a vector of parameters when the flattened form of the resulting derivatives is required, i.e. a representation of derivatives in the form of matrices. To do this


we introduce a new set of Kronecker matrix products as well as a generalized matrix transposition. First order and higher order derivatives of functions built from primitive functions by elementary matrix operations, such as summation, multiplication, transposition and the Kronecker product, can then be expressed in closed form in terms of the primitive matrix functions and their derivatives, using these elementary operations, the generalized Kronecker products and the generalized transpositions.

We also consider more general expressions containing the matrix functions inverse, trace and determinant. Defining a generalized trace function, we are able to express the derivatives of such functions in closed form.

2 Matrix differentiation rules

Let us consider smooth functions $\Omega \ni \theta \mapsto X(\theta) \in \mathbb{R}^{m\times n}$ and $\Omega \ni \theta \mapsto Y(\theta) \in \mathbb{R}^{p\times q}$, where $\Omega \subset \mathbb{R}^k$ is an open set. The functions $X$, $Y$ associate an $m\times n$ and a $p\times q$ matrix with a given vector of parameters $\theta = \operatorname{col}(\theta_1, \theta_2, \ldots, \theta_k)$. The derivative of the function $X$ with respect to $\theta$ is defined as

$$\frac{\partial X}{\partial\theta} = \begin{bmatrix} \frac{\partial X}{\partial\theta_1} & \frac{\partial X}{\partial\theta_2} & \cdots & \frac{\partial X}{\partial\theta_k} \end{bmatrix}$$

where $\partial X/\partial\theta_i \in \mathbb{R}^{m\times n}$, $i = 1, 2, \ldots, k$.

Proposition 2.1. The following equations hold:

1. $\frac{\partial}{\partial\theta}(\alpha X) = \alpha \frac{\partial X}{\partial\theta}$
2. $\frac{\partial}{\partial\theta}(X + Y) = \frac{\partial X}{\partial\theta} + \frac{\partial Y}{\partial\theta}$
3. $\frac{\partial}{\partial\theta}(X \times Y) = \frac{\partial X}{\partial\theta} \times (I_k \otimes Y) + X \times \frac{\partial Y}{\partial\theta}$

where $\alpha \in \mathbb{R}$ and $I_k$ is a $k\times k$ dimensional identity matrix, assuming that the derivatives exist and the matrix dimensions coincide.

Proof. The first two cases are obvious. We have

$$\frac{\partial}{\partial\theta}(X \times Y) = \begin{bmatrix} \frac{\partial X}{\partial\theta_1} \times Y + X \times \frac{\partial Y}{\partial\theta_1} & \cdots & \frac{\partial X}{\partial\theta_k} \times Y + X \times \frac{\partial Y}{\partial\theta_k} \end{bmatrix}$$
$$= \begin{bmatrix} \frac{\partial X}{\partial\theta_1} & \cdots & \frac{\partial X}{\partial\theta_k} \end{bmatrix} \times \begin{bmatrix} Y & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & Y \end{bmatrix} + X \times \begin{bmatrix} \frac{\partial Y}{\partial\theta_1} & \cdots & \frac{\partial Y}{\partial\theta_k} \end{bmatrix}$$
$$= \frac{\partial X}{\partial\theta} \times (I_k \otimes Y) + X \times \frac{\partial Y}{\partial\theta}$$
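The product rule of Proposition 2.1.3 is easy to test numerically. The sketch below is our own construction, not part of the paper; the helper `flat_deriv` is a hypothetical name for building the flattened derivative from central differences.

```python
import numpy as np

# Check of Proposition 2.1.3: d(XY)/dθ = dX/dθ (I_k ⊗ Y) + X dY/dθ,
# where dX/dθ = [dX/dθ_1 ... dX/dθ_k] is the flattened derivative.
rng = np.random.default_rng(0)
k, m, n, q = 3, 2, 4, 5
A = rng.standard_normal((m, n, k))      # X(θ) = A·θ, linear for simplicity
B = rng.standard_normal((n, q, k))      # Y(θ) = B·θ
X = lambda t: A @ t
Y = lambda t: B @ t

def flat_deriv(F, t, h=1e-6):
    """Central-difference partials, concatenated: [∂F/∂θ_1 ... ∂F/∂θ_k]."""
    cols = []
    for i in range(len(t)):
        e = np.zeros_like(t); e[i] = h
        cols.append((F(t + e) - F(t - e)) / (2 * h))
    return np.hstack(cols)

t = rng.standard_normal(k)
lhs = flat_deriv(lambda s: X(s) @ Y(s), t)
rhs = flat_deriv(X, t) @ np.kron(np.eye(k), Y(t)) + X(t) @ flat_deriv(Y, t)
```

Since $X(\theta)Y(\theta)$ is quadratic in $\theta$ here, the central differences agree with the closed form up to rounding.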


Differentiating the matrix transposition is a little more complicated. Let us define a generalized matrix transposition.

Definition 2.2. Let $X = [X_1, X_2, \ldots, X_n]$, where each $X_i \in \mathbb{R}^{p\times q}$, $i = 1, 2, \ldots, n$, be a partition of a $p\times nq$ dimensional matrix $X$. Then

$$T_n(X) \doteq \begin{bmatrix} X_1^T, X_2^T, \ldots, X_n^T \end{bmatrix}$$

Proposition 2.3. The following equations hold:

1. $\frac{\partial}{\partial\theta}(X^T) = T_k\!\left(\frac{\partial X}{\partial\theta}\right)$
2. $\frac{\partial}{\partial\theta}(T_n(X)) = T_{k\times n}\!\left(\frac{\partial X}{\partial\theta}\right)$

Proof. The first condition is a special case of the second condition for $n = 1$. We have

$$\frac{\partial}{\partial\theta}(T_n(X)) = \begin{bmatrix} T_n\!\left(\frac{\partial X}{\partial\theta_1}\right) & \cdots & T_n\!\left(\frac{\partial X}{\partial\theta_k}\right) \end{bmatrix} = \begin{bmatrix} \left(\frac{\partial X_1}{\partial\theta_1}\right)^{\!T}\!, \ldots, \left(\frac{\partial X_n}{\partial\theta_1}\right)^{\!T} & \cdots & \left(\frac{\partial X_1}{\partial\theta_k}\right)^{\!T}\!, \ldots, \left(\frac{\partial X_n}{\partial\theta_k}\right)^{\!T} \end{bmatrix} = T_{k\times n}\!\left(\frac{\partial X}{\partial\theta}\right)$$

since

$$\frac{\partial X}{\partial\theta} = \begin{bmatrix} \frac{\partial X_1}{\partial\theta_1}, \ldots, \frac{\partial X_n}{\partial\theta_1} & \cdots & \frac{\partial X_1}{\partial\theta_k}, \ldots, \frac{\partial X_n}{\partial\theta_k} \end{bmatrix}$$
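A minimal sketch of the generalized transposition (the helper name `T_blocks` is ours; the paper only defines the notation $T_n$), with a check of statement 2 of Proposition 2.3 on a function linear in $\theta$:

```python
import numpy as np

def T_blocks(n, X):
    """T_n(X) = [X_1^T, ..., X_n^T] for the column partition X = [X_1 ... X_n]."""
    return np.hstack([Xi.T for Xi in np.hsplit(X, n)])

# Proposition 2.3.2: d(T_n(X))/dθ = T_{k·n}(dX/dθ), on X(θ) = A·θ ∈ R^{p×nq}.
rng = np.random.default_rng(1)
k, p, q, n = 2, 3, 2, 4
A = rng.standard_normal((p, n * q, k))
dX = np.hstack([A[:, :, i] for i in range(k)])                # exact dX/dθ
dTX = np.hstack([T_blocks(n, A[:, :, i]) for i in range(k)])  # exact d(T_n X)/dθ
```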

Let us now turn to differentiating tensor products of matrices. Let $X \in \mathbb{R}^{p\times q}$ be a matrix with elements $x_{ij} \in \mathbb{R}$ for $i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, q$, and let $Y$ be any matrix. The Kronecker product $X \otimes Y$ is defined as

$$X \otimes Y \doteq \begin{bmatrix} x_{11} Y & \cdots & x_{1q} Y \\ \vdots & \ddots & \vdots \\ x_{p1} Y & \cdots & x_{pq} Y \end{bmatrix}$$

Similarly to the case of differentiating the matrix transposition, we need to introduce a generalized Kronecker product.

Definition 2.4. Let $X = [X_1, X_2, \ldots, X_m]$, where each $X_i \in \mathbb{R}^{p\times q}$, be a partition of a $p\times mq$ dimensional matrix $X$, and let $Y = [Y_1, Y_2, \ldots, Y_n]$, where each $Y_i \in \mathbb{R}^{r\times s}$, be a partition of an $r\times ns$ dimensional matrix $Y$. Then

$$X \otimes^1_n Y \doteq [\, X \otimes Y_1, \ldots, X \otimes Y_n \,]$$
$$X \otimes^m_n Y \doteq [\, X_1 \otimes^1_n Y, \ldots, X_m \otimes^1_n Y \,]$$
$$X \otimes^{1,m_2,\ldots,m_s}_{n_1,n_2,\ldots,n_s} Y \doteq [\, X \otimes^{m_2,\ldots,m_s}_{n_2,\ldots,n_s} Y_1, \ldots, X \otimes^{m_2,\ldots,m_s}_{n_2,\ldots,n_s} Y_{n_1} \,]$$
$$X \otimes^{m_1,m_2,\ldots,m_s}_{n_1,n_2,\ldots,n_s} Y \doteq [\, X_1 \otimes^{1,m_2,\ldots,m_s}_{n_1,n_2,\ldots,n_s} Y, \ldots, X_{m_1} \otimes^{1,m_2,\ldots,m_s}_{n_1,n_2,\ldots,n_s} Y \,]$$

assuming that the appropriate matrix partitions exist.
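Definition 2.4 translates directly into a recursive implementation. The sketch below is our own code (the paper only defines the notation), under the assumption that every block partition divides evenly; the outermost index pair is consumed first:

```python
import numpy as np

def gkron(X, ms, ns, Y):
    """X ⊗^{m_1,...,m_s}_{n_1,...,n_s} Y per Definition 2.4: the first index
    pair splits X into m_1 and Y into n_1 column blocks, then recurses."""
    ms, ns = list(ms), list(ns)
    if not ms:                       # s = 0: ordinary Kronecker product
        return np.kron(X, Y)
    if ms[0] == 1:                   # X ⊗^{1,...}_{n_1,...} Y = [X ⊗^{...} Y_j]_j
        return np.hstack([gkron(X, ms[1:], ns[1:], Yj)
                          for Yj in np.hsplit(Y, ns[0])])
    # X ⊗^{m_1,...}_{n_1,...} Y = [X_i ⊗^{1,m_2,...}_{n_1,n_2,...} Y]_i
    return np.hstack([gkron(Xi, [1] + ms[1:], ns, Y)
                      for Xi in np.hsplit(X, ms[0])])

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 4)); B = rng.standard_normal((3, 6))
```

For instance, `gkron(A, [2], [1], B)` reproduces the plain Kronecker product, in line with Proposition 5.1 below.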


Proposition 2.5. The following equations hold:

1. $\frac{\partial}{\partial\theta}(X \otimes Y) = \frac{\partial X}{\partial\theta} \otimes Y + X \otimes^1_k \frac{\partial Y}{\partial\theta}$
2. $\frac{\partial}{\partial\theta}\!\left(X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y\right) = \frac{\partial X}{\partial\theta} \otimes^{k,m_1,\ldots,m_s}_{1,n_1,\ldots,n_s} Y + X \otimes^{1,m_1,\ldots,m_s}_{k,n_1,\ldots,n_s} \frac{\partial Y}{\partial\theta}$

Proof. We have

$$\frac{\partial}{\partial\theta}\!\left(X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y\right) = \begin{bmatrix} \frac{\partial}{\partial\theta_1}\!\left(X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y\right) & \cdots & \frac{\partial}{\partial\theta_k}\!\left(X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y\right) \end{bmatrix}$$
$$= \begin{bmatrix} \frac{\partial X}{\partial\theta_1} \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y & \cdots & \frac{\partial X}{\partial\theta_k} \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y \end{bmatrix} + \begin{bmatrix} X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} \frac{\partial Y}{\partial\theta_1} & \cdots & X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} \frac{\partial Y}{\partial\theta_k} \end{bmatrix}$$
$$= \frac{\partial X}{\partial\theta} \otimes^{k,m_1,\ldots,m_s}_{1,n_1,\ldots,n_s} Y + X \otimes^{1,m_1,\ldots,m_s}_{k,n_1,\ldots,n_s} \frac{\partial Y}{\partial\theta}$$

Since $X \otimes Y = X \otimes^1_1 Y$, in the case of the standard Kronecker product we obtain

$$\frac{\partial}{\partial\theta}(X \otimes Y) = \frac{\partial X}{\partial\theta} \otimes^k_1 Y + X \otimes^1_k \frac{\partial Y}{\partial\theta} = \frac{\partial X}{\partial\theta} \otimes Y + X \otimes^1_k \frac{\partial Y}{\partial\theta}$$
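Statement 1 of Proposition 2.5 can be checked with finite differences. In the sketch below (our own construction), $X \otimes^1_k \partial Y/\partial\theta$ is just the concatenation $[X \otimes \partial Y/\partial\theta_1, \ldots, X \otimes \partial Y/\partial\theta_k]$:

```python
import numpy as np

# Check of Proposition 2.5.1: d(X⊗Y)/dθ = dX/dθ ⊗ Y + X ⊗^1_k dY/dθ.
rng = np.random.default_rng(3)
k, m, n, p, q = 3, 2, 3, 2, 2
A = rng.standard_normal((m, n, k)); B = rng.standard_normal((p, q, k))
X = lambda t: A @ t                      # X(θ), Y(θ) linear in θ
Y = lambda t: B @ t

def flat_deriv(F, t, h=1e-6):
    cols = []
    for i in range(len(t)):
        e = np.zeros_like(t); e[i] = h
        cols.append((F(t + e) - F(t - e)) / (2 * h))
    return np.hstack(cols)

t = rng.standard_normal(k)
dX = np.hstack([A[:, :, i] for i in range(k)])
lhs = flat_deriv(lambda s: np.kron(X(s), Y(s)), t)
rhs = np.kron(dX, Y(t)) + np.hstack([np.kron(X(t), B[:, :, i]) for i in range(k)])
```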

In Proposition 2.1 we omitted the case of multiplication of a matrix by a scalar function; using Proposition 2.5 we obtain

Proposition 2.6. Let $\alpha$ be a scalar function of $\theta$ and let $X$ be a matrix valued function of $\theta$, $X(\theta) \in \mathbb{R}^{p\times q}$. Then

$$\frac{\partial}{\partial\theta}(\alpha X) = \alpha \times \frac{\partial X}{\partial\theta} + \frac{\partial\alpha}{\partial\theta} \otimes X$$

Proof. The expression $\alpha X$ can be represented as $\alpha X = (\alpha \otimes I_p) \times X$, where $I_p$ is a $p\times p$ dimensional identity matrix. Hence

$$\frac{\partial}{\partial\theta}(\alpha X) = \frac{\partial}{\partial\theta}\big((\alpha \otimes I_p) \times X\big) = \frac{\partial(\alpha \otimes I_p)}{\partial\theta} \times (I_k \otimes X) + (\alpha \otimes I_p) \times \frac{\partial X}{\partial\theta}$$
$$= \left(\frac{\partial\alpha}{\partial\theta} \otimes I_p\right) \times (I_k \otimes X) + \alpha \times \frac{\partial X}{\partial\theta} = \frac{\partial\alpha}{\partial\theta} \otimes X + \alpha \times \frac{\partial X}{\partial\theta}$$

Let $S$ be a set of smooth matrix valued functions $\Omega \ni \theta \mapsto X(\theta) \in \mathbb{R}^{p\times q}$, where $\Omega \subset \mathbb{R}^k$ is an open set, for any integers $p, q \geq 1$, not necessarily the same for all functions in $S$. Let $\operatorname{dif} S \doteq \{\partial X/\partial\theta : X \in S\}$. The set $S$ may contain scalars and matrices, which are interpreted as constant functions.


Let $\operatorname{ext}(S)$ be the set of functions obtained by applying elementary matrix operations to the set $S$, i.e. $\operatorname{ext}(S)$ is the smallest set such that if $X, Y \in \operatorname{ext}(S)$, then the matrix valued functions $X + Y$, $X \times Y$, $T_n(X)$ and $X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y$, if they exist, belong to $\operatorname{ext}(S)$, where $n, n_1, \ldots, n_s, m_1, \ldots, m_s$ are any positive integers.

Theorem 2.7. $\operatorname{dif}(\operatorname{ext}(S)) = \operatorname{ext}(S \cup \operatorname{dif}(S))$.

Proof. By induction using propositions 2.1, 2.3, 2.5, 2.6.

Theorem 2.7 states that derivatives of matrix valued functions obtained by applying elementary operations like summation, matrix multiplication, generalized transposition and the generalized Kronecker tensor product can be expressed as a combination of these functions and their derivatives, using the same elementary operations. Applying Theorem 2.7 to the set $T = \operatorname{dif}(\operatorname{ext}(S))$, we see that higher order derivatives can also be expressed, using these elementary operations, as combinations of the elementary functions in $S$ and their higher order derivatives.

3 Derivatives of matrix determinant, trace and inverse

Let us consider derivatives of the matrix inverse, determinant and trace. We need to introduce a generalized trace, defined analogously to the generalized transposition.

Definition 3.1. Let $X = [X_1, X_2, \ldots, X_n]$, where each $X_i \in \mathbb{R}^{p\times p}$, $i = 1, 2, \ldots, n$, be a partition of a $p\times np$ dimensional matrix $X$. Then

$$\operatorname{tr}_n(X) \doteq \begin{bmatrix} \operatorname{tr} X_1, \operatorname{tr} X_2, \ldots, \operatorname{tr} X_n \end{bmatrix}$$

Proposition 3.2. The following equations hold:

1. $\frac{\partial \det(X)}{\partial\theta} = \det(X) \times \operatorname{tr}_k\!\left(X^{-1} \times \frac{\partial X}{\partial\theta}\right)$
2. $\frac{\partial \operatorname{tr}_n(X)}{\partial\theta} = \operatorname{tr}_{k\times n}\!\left(\frac{\partial X}{\partial\theta}\right)$
3. $\frac{\partial X^{-1}}{\partial\theta} = -X^{-1} \times \frac{\partial X}{\partial\theta} \times (I_k \otimes X^{-1})$


Proof. We have

$$\frac{\partial \det(X)}{\partial\theta} = \begin{bmatrix} \frac{\partial \det(X)}{\partial\theta_1} & \cdots & \frac{\partial \det(X)}{\partial\theta_k} \end{bmatrix} = \begin{bmatrix} \det(X)\operatorname{tr}\!\left(X^{-1}\frac{\partial X}{\partial\theta_1}\right) & \cdots & \det(X)\operatorname{tr}\!\left(X^{-1}\frac{\partial X}{\partial\theta_k}\right) \end{bmatrix}$$
$$= \det(X) \times \operatorname{tr}_k\!\left(X^{-1} \times \frac{\partial X}{\partial\theta}\right)$$

and

$$\frac{\partial \operatorname{tr}_n(X)}{\partial\theta} = \begin{bmatrix} \frac{\partial \operatorname{tr}_n(X)}{\partial\theta_1} & \cdots & \frac{\partial \operatorname{tr}_n(X)}{\partial\theta_k} \end{bmatrix} = \begin{bmatrix} \operatorname{tr}_n\!\left(\frac{\partial X}{\partial\theta_1}\right) & \cdots & \operatorname{tr}_n\!\left(\frac{\partial X}{\partial\theta_k}\right) \end{bmatrix} = \operatorname{tr}_{k\times n}\!\left(\frac{\partial X}{\partial\theta}\right)$$

Similarly,

$$\frac{\partial X^{-1}}{\partial\theta} = \begin{bmatrix} \frac{\partial X^{-1}}{\partial\theta_1} & \cdots & \frac{\partial X^{-1}}{\partial\theta_k} \end{bmatrix} = -\begin{bmatrix} X^{-1}\frac{\partial X}{\partial\theta_1}X^{-1} & \cdots & X^{-1}\frac{\partial X}{\partial\theta_k}X^{-1} \end{bmatrix}$$
$$= -X^{-1} \times \begin{bmatrix} \frac{\partial X}{\partial\theta_1} & \cdots & \frac{\partial X}{\partial\theta_k} \end{bmatrix} \times (I_k \otimes X^{-1}) = -X^{-1}\frac{\partial X}{\partial\theta}(I_k \otimes X^{-1})$$

since in the case of a scalar parameter $\theta \in \mathbb{R}$ we have $\partial\det(X)/\partial\theta = \det(X)\operatorname{tr}(X^{-1}\partial X/\partial\theta)$, $\partial\operatorname{tr}(X)/\partial\theta = \operatorname{tr}(\partial X/\partial\theta)$, and $\partial X^{-1}/\partial\theta = -X^{-1}(\partial X/\partial\theta)X^{-1}$ (see, for example, Petersen and Pedersen (2006)).
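Statements 1 and 3 of Proposition 3.2 can be verified numerically. The sketch below is ours (helper names are assumptions), with a well-conditioned $X(\theta)$ so that the inverse exists:

```python
import numpy as np

# Finite-difference check of Proposition 3.2, statements 1 and 3, where
# tr_k takes the trace of each of the k square blocks of its argument.
rng = np.random.default_rng(4)
k, p = 3, 4
A = rng.standard_normal((p, p, k))
X = lambda t: 8.0 * np.eye(p) + A @ t          # well-conditioned X(θ)

def flat_deriv(F, t, h=1e-6):
    cols = []
    for i in range(len(t)):
        e = np.zeros_like(t); e[i] = h
        cols.append(np.atleast_2d((F(t + e) - F(t - e)) / (2 * h)))
    return np.hstack(cols)

def tr_blocks(n, Z):
    """tr_n(Z) = [tr Z_1, ..., tr Z_n] for the column partition of Z."""
    return np.array([np.trace(Zi) for Zi in np.hsplit(Z, n)])

t = rng.standard_normal(k)
dX = np.hstack([A[:, :, i] for i in range(k)])
Xi = np.linalg.inv(X(t))

d_det = flat_deriv(lambda s: np.atleast_1d(np.linalg.det(X(s))), t).ravel()
det_rule = np.linalg.det(X(t)) * tr_blocks(k, Xi @ dX)        # statement 1
d_inv = flat_deriv(lambda s: np.linalg.inv(X(s)), t)
inv_rule = -Xi @ dX @ np.kron(np.eye(k), Xi)                  # statement 3
```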

Let the set $S$ and the operation $\operatorname{dif}$ be defined as in the previous section. Let $\operatorname{ext}_2(S)$ be the set of functions obtained by applying the elementary matrix operations and the matrix determinant, trace and inverse to the set $S$, i.e. $\operatorname{ext}_2(S)$ is the smallest set such that if $X, Y \in \operatorname{ext}_2(S)$, then the matrix valued functions $X + Y$, $X \times Y$, $T_n(X)$, $X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y$, $\det(X)$, $\operatorname{tr}_n(X)$ and $X^{-1}$, if they exist, belong to $\operatorname{ext}_2(S)$, where $n, n_1, \ldots, n_s, m_1, \ldots, m_s$ are any positive integers.

Theorem 3.3. $\operatorname{dif}(\operatorname{ext}_2(S)) = \operatorname{ext}_2(S \cup \operatorname{dif}(S))$.

Proof. By induction using propositions 2.1, 2.3, 2.5, 2.6, 3.2.

4 Derivatives of function composition

Let $f$ be a matrix valued function given by $\mathbb{R}^p \ni x \mapsto f(x) \in \mathbb{R}^{m\times n}$

and let $g$ be a vector valued function $\Omega \ni \theta \mapsto g(\theta) \in \mathbb{R}^p$. We can define the function composition $\Omega \ni \theta \mapsto f(g(\theta)) \in \mathbb{R}^{m\times n}$.

Proposition 4.1. The following condition holds:

$$\frac{\partial}{\partial\theta} f(g(\theta)) = \frac{\partial f(g(\theta))}{\partial x} \times \left(\frac{\partial g(\theta)}{\partial\theta} \otimes I_n\right)$$


Proof. Let

$$f(x) = \begin{bmatrix} f_{11}(x) & \cdots & f_{1n}(x) \\ \vdots & \ddots & \vdots \\ f_{m1}(x) & \cdots & f_{mn}(x) \end{bmatrix}$$

where the $f_{ij}(x)$ are scalar valued functions. Then, writing $x = g(\theta)$, for $s = 1, \ldots, k$

$$\frac{\partial f_{ij}(x)}{\partial\theta_s} = \sum_{l=1}^{p} \frac{\partial f_{ij}(x)}{\partial x_l} \times \frac{\partial x_l}{\partial\theta_s} = \frac{\partial f_{ij}(x)}{\partial x} \times \frac{\partial x}{\partial\theta_s}$$

since $\partial x/\partial\theta_s$ is a column vector. Further,

$$\frac{\partial f(x)}{\partial\theta_s} = \sum_{l=1}^{p} \begin{bmatrix} \frac{\partial f_{11}(x)}{\partial x_l} & \cdots & \frac{\partial f_{1n}(x)}{\partial x_l} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_{m1}(x)}{\partial x_l} & \cdots & \frac{\partial f_{mn}(x)}{\partial x_l} \end{bmatrix} \times \frac{\partial x_l}{\partial\theta_s} = \sum_{l=1}^{p} \frac{\partial f(x)}{\partial x_l} \times \frac{\partial x_l}{\partial\theta_s}$$
$$= \begin{bmatrix} \frac{\partial f(x)}{\partial x_1} & \cdots & \frac{\partial f(x)}{\partial x_p} \end{bmatrix} \times \begin{bmatrix} I_n \times \frac{\partial x_1}{\partial\theta_s} \\ \vdots \\ I_n \times \frac{\partial x_p}{\partial\theta_s} \end{bmatrix} = \frac{\partial f(x)}{\partial x} \times \left(\frac{\partial x}{\partial\theta_s} \otimes I_n\right)$$

Finally,

$$\frac{\partial f(x)}{\partial\theta} = \frac{\partial f(x)}{\partial x} \times \begin{bmatrix} \frac{\partial x}{\partial\theta_1} \otimes I_n & \cdots & \frac{\partial x}{\partial\theta_k} \otimes I_n \end{bmatrix} = \frac{\partial f(x)}{\partial x} \times \left(\frac{\partial x}{\partial\theta} \otimes I_n\right)$$
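A small numerical check (our construction, with hypothetical example functions) of Proposition 4.1, using a quadratic $f$ and a linear $g$ so that central differences are exact up to rounding:

```python
import numpy as np

# Check of Proposition 4.1 for f: R^p → R^{m×n}:
# d f(g(θ))/dθ = df/dx|_{g(θ)} × (dg/dθ ⊗ I_n).
rng = np.random.default_rng(8)
p, m, n, k = 3, 2, 2, 2
C = rng.standard_normal((m, n, p)); G = rng.standard_normal((p, k))
f = lambda x: C @ (x ** 2)          # f(x) = Σ_l C_l x_l², so ∂f/∂x_l = 2 x_l C_l
g = lambda t: G @ t                 # g(θ) = Gθ, so ∂g/∂θ = G

def flat_deriv(F, t, h=1e-6):
    cols = []
    for i in range(len(t)):
        e = np.zeros_like(t); e[i] = h
        cols.append((F(t + e) - F(t - e)) / (2 * h))
    return np.hstack(cols)

t = rng.standard_normal(k)
x = g(t)
df_dx = np.hstack([2 * x[l] * C[:, :, l] for l in range(p)])   # m × pn
lhs = flat_deriv(lambda s: f(g(s)), t)                         # m × kn
rhs = df_dx @ np.kron(G, np.eye(n))
```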

5 Properties of the generalized Kronecker product

Proposition 5.1. For any matrices $A$, $B$:

1. $A \otimes^k_1 B = A \otimes B$.
2. $A \otimes^{\ldots,m_k,1,\ldots}_{\ldots,n_k,1,\ldots} B = A \otimes^{\ldots,m_k,\ldots}_{\ldots,n_k,\ldots} B$.
3. $A \otimes^{\ldots,1,1,\ldots}_{\ldots,n_k,n_{k+1},\ldots} B = A \otimes^{\ldots,1,\ldots}_{\ldots,n_k\times n_{k+1},\ldots} B$.
4. $A \otimes^{\ldots,m_k,m_{k+1},\ldots}_{\ldots,1,n_{k+1},\ldots} B = A \otimes^{\ldots,m_k\times m_{k+1},\ldots}_{\ldots,n_{k+1},\ldots} B$.

assuming that the Kronecker products exist.


Proposition 5.2. For any matrices $A$, $B$, $C$:

1. $A \otimes^{m_1,\ldots,m_k}_{n_1,\ldots,n_k} (B + C) = A \otimes^{m_1,\ldots,m_k}_{n_1,\ldots,n_k} B + A \otimes^{m_1,\ldots,m_k}_{n_1,\ldots,n_k} C$.
2. $(A + B) \otimes^{m_1,\ldots,m_k}_{n_1,\ldots,n_k} C = A \otimes^{m_1,\ldots,m_k}_{n_1,\ldots,n_k} C + B \otimes^{m_1,\ldots,m_k}_{n_1,\ldots,n_k} C$.

assuming that the Kronecker products exist and the matrix dimensions coincide.

Proposition 5.3. For any matrices $A$, $B$, $C$, $D$:

$$(AB) \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} (CD) = (A \otimes C) \times \left(B \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} D\right)$$

assuming that the products $AB$ and $CD$, as well as the Kronecker products, exist.

Proof. Observe that $X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y = X \otimes^{m_1,\ldots,m_s,1}_{n_1,\ldots,n_s,1} Y$, and $(AB) \otimes^1_1 (CD) = (A\otimes C) \times (B \otimes^1_1 D)$, since $(AB)\otimes(CD) = (A\otimes C)\times(B\otimes D)$. Let $(AB) \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} (CD) = (A\otimes C)\times\left(B \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} D\right)$ for some $k \geq 0$. Then

$$(AB) \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} (CD) = \begin{bmatrix} (AB) \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} (CD_1) & \cdots & (AB) \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} (CD_{n_{k+1}}) \end{bmatrix}$$
$$= \begin{bmatrix} (A\otimes C)\!\left(B \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} D_1\right) & \cdots & (A\otimes C)\!\left(B \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} D_{n_{k+1}}\right) \end{bmatrix} = (A\otimes C)\times\left(B \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} D\right)$$

Similarly,

$$(AB) \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} (CD) = \begin{bmatrix} (AB_1) \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} (CD) & \cdots & (AB_{m_{k+1}}) \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} (CD) \end{bmatrix}$$
$$= \begin{bmatrix} (A\otimes C)\!\left(B_1 \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} D\right) & \cdots & (A\otimes C)\!\left(B_{m_{k+1}} \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} D\right) \end{bmatrix} = (A\otimes C)\times\left(B \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} D\right)$$
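For the simplest instance $s = 1$, $m_1 = 1$, Proposition 5.3 can be checked directly. The sketch below is ours; `gk1` implements the one-pair product $X \otimes^1_n Y$:

```python
import numpy as np

def gk1(n, X, Y):
    """X ⊗^1_n Y = [X⊗Y_1 ... X⊗Y_n] (Definition 2.4, s = 1, m_1 = 1)."""
    return np.hstack([np.kron(X, Yi) for Yi in np.hsplit(Y, n)])

# Check of Proposition 5.3: (AB) ⊗^1_3 (CD) = (A⊗C) × (B ⊗^1_3 D).
rng = np.random.default_rng(5)
A = rng.standard_normal((2, 3)); B = rng.standard_normal((3, 4))
C = rng.standard_normal((2, 2)); D = rng.standard_normal((2, 6))
lhs = gk1(3, A @ B, C @ D)
rhs = np.kron(A, C) @ gk1(3, B, D)
```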

Proposition 5.4. For any matrices $A$, $B$ of sizes $p_1\times q_1$ and $p_2\times q_2$:

$$A \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} B = (A \otimes B) \times \left(I_{q_1} \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} I_{q_2}\right)$$

assuming that the Kronecker products exist.

Proposition 5.5. For any matrices $A$, $B$, $C$:

$$A \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} (B \otimes C) = \left(A \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} B\right) \otimes C$$

assuming that the Kronecker products exist.


Proof. Observe that $X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y = X \otimes^{m_1,\ldots,m_s,1}_{n_1,\ldots,n_s,1} Y$, and $A \otimes^1_1 (B\otimes C) = (A \otimes^1_1 B)\otimes C$, since $A\otimes(B\otimes C) = (A\otimes B)\otimes C$. Let $A \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} (B\otimes C) = \left(A \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} B\right)\otimes C$ for some $k \geq 0$. Then

$$A \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} (B\otimes C) = \begin{bmatrix} A \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} (B_1\otimes C) & \cdots & A \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} (B_{n_{k+1}}\otimes C) \end{bmatrix}$$
$$= \begin{bmatrix} \left(A \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} B_1\right)\otimes C & \cdots & \left(A \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} B_{n_{k+1}}\right)\otimes C \end{bmatrix} = \left(A \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} B\right)\otimes C$$

Similarly,

$$A \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} (B\otimes C) = \begin{bmatrix} A_1 \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} (B\otimes C) & \cdots & A_{m_{k+1}} \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} (B\otimes C) \end{bmatrix}$$
$$= \begin{bmatrix} \left(A_1 \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} B\right)\otimes C & \cdots & \left(A_{m_{k+1}} \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} B\right)\otimes C \end{bmatrix} = \left(A \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} B\right)\otimes C$$

Proposition 5.6. For any matrices $A$, $B$, $C$:

$$A \otimes \left(B \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} C\right) = (A \otimes B) \otimes^{q,m_1,\ldots,m_s}_{1,n_1,\ldots,n_s} C$$

where $q$ is the number of columns of the matrix $A$, assuming that the Kronecker products exist.

Proof. Observe that $X \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} Y = X \otimes^{m_1,\ldots,m_s,1}_{n_1,\ldots,n_s,1} Y$, and $A\otimes(B \otimes^1_1 C) = (A\otimes B)\otimes^1_1 C = (A\otimes B) \otimes^{q,1}_{1,1} C$, since $A\otimes(B\otimes C) = (A\otimes B)\otimes C$ and $A \otimes^q_1 C = A\otimes C$ whenever the Kronecker product exists. Let $A\otimes\left(B \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} C\right) = (A\otimes B) \otimes^{q,m_k,\ldots,m_1,1}_{1,n_k,\ldots,n_1,1} C$ for some $k \geq 0$, and let $A = [A_1, \ldots, A_q]$. Then

$$A_i\otimes\left(B \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C\right) = \begin{bmatrix} A_i\otimes\left(B \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} C_1\right) & \cdots & A_i\otimes\left(B \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} C_{n_{k+1}}\right) \end{bmatrix}$$
$$= \begin{bmatrix} (A_i\otimes B) \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} C_1 & \cdots & (A_i\otimes B) \otimes^{m_k,\ldots,m_1,1}_{n_k,\ldots,n_1,1} C_{n_{k+1}} \end{bmatrix} = (A_i\otimes B) \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C$$

Similarly,

$$A_i\otimes\left(B \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C\right) = \begin{bmatrix} A_i\otimes\left(B_1 \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C\right) & \cdots & A_i\otimes\left(B_{m_{k+1}} \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C\right) \end{bmatrix}$$
$$= \begin{bmatrix} (A_i\otimes B_1) \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C & \cdots & (A_i\otimes B_{m_{k+1}}) \otimes^{1,m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C \end{bmatrix} = (A_i\otimes B) \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C$$

Finally,

$$A\otimes\left(B \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C\right) = \begin{bmatrix} A_1\otimes\left(B \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C\right) & \cdots & A_q\otimes\left(B \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C\right) \end{bmatrix}$$
$$= \begin{bmatrix} (A_1\otimes B) \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C & \cdots & (A_q\otimes B) \otimes^{m_{k+1},m_k,\ldots,m_1,1}_{n_{k+1},n_k,\ldots,n_1,1} C \end{bmatrix} = (A\otimes B) \otimes^{q,m_{k+1},m_k,\ldots,m_1,1}_{1,n_{k+1},n_k,\ldots,n_1,1} C$$

Proposition 5.7. Let $A$ be an $m\times n$ matrix and let $B$ be a $p\times q$ matrix. Then

$$A \otimes^1_q B = \left(I_m \otimes^1_p I_p\right) \times (B \otimes A)$$

Proof. Let $A_i$ be the $i$-th column of $A$ and $B_j$ the $j$-th column of $B$. Let $I_p^r$ denote the $r$-th column of the $p\times p$ identity matrix and let $b_{rj}$ denote the element of $B$ in the $r$-th row and $j$-th column. Then

$$\left(I_m \otimes^1_p I_p\right) \times (B_j \otimes A_i) = \begin{bmatrix} I_m \otimes I_p^1 & \cdots & I_m \otimes I_p^p \end{bmatrix} \times \begin{bmatrix} b_{1j} A_i \\ \vdots \\ b_{pj} A_i \end{bmatrix}$$
$$= \sum_{r=1}^{p} \left(I_m \otimes I_p^r\right) \times \left(A_i \otimes b_{rj}\right) = \sum_{r=1}^{p} A_i \otimes \left(I_p^r\, b_{rj}\right) = A_i \otimes B_j$$

Further,

$$\left(I_m \otimes^1_p I_p\right) \times (B_j \otimes A) = \left(I_m \otimes^1_p I_p\right) \times \begin{bmatrix} B_j\otimes A_1 & \cdots & B_j\otimes A_n \end{bmatrix} = \begin{bmatrix} A_1\otimes B_j & \cdots & A_n\otimes B_j \end{bmatrix} = A \otimes B_j$$

and

$$\left(I_m \otimes^1_p I_p\right) \times (B \otimes A) = \left(I_m \otimes^1_p I_p\right) \times \begin{bmatrix} B_1\otimes A & \cdots & B_q\otimes A \end{bmatrix} = \begin{bmatrix} A\otimes B_1 & \cdots & A\otimes B_q \end{bmatrix} = A \otimes^1_q B$$
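A numerical check (ours) of Proposition 5.7; the permutation matrix $I_m \otimes^1_p I_p$ is built with the same one-pair product `gk1`:

```python
import numpy as np

def gk1(n, X, Y):
    """X ⊗^1_n Y = [X⊗Y_1 ... X⊗Y_n]."""
    return np.hstack([np.kron(X, Yi) for Yi in np.hsplit(Y, n)])

rng = np.random.default_rng(6)
m, n, p, q = 2, 3, 2, 4
A = rng.standard_normal((m, n)); B = rng.standard_normal((p, q))
P = gk1(p, np.eye(m), np.eye(p))           # the permutation I_m ⊗^1_p I_p
lhs = gk1(q, A, B)                         # A ⊗^1_q B
rhs = P @ np.kron(B, A)
```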

Proposition 5.8. Let $A$ be an $m\times n$ matrix and let $B$ be a $p\times q$ matrix. Then

$$A \otimes^{1,n_1,1,\ldots,1,n_s,1}_{m_1,1,\ldots,m_s,1,\,q/\bar m} B = \left(I_m \otimes^1_p I_p\right) \times \left(B \otimes^{m_1,\ldots,m_s}_{n_1,\ldots,n_s} A\right)$$

where $\bar m = m_1\times\cdots\times m_s$, assuming that the Kronecker products exist.


Proof. The proposition holds for $s = 0$. Let, for some $s \geq 0$,

$$A \otimes^{1,n_s,1,\ldots,1,n_1,1}_{m_s,1,\ldots,m_1,1,\,q/\bar m_s} B = \left(I_m \otimes^1_p I_p\right) \times \left(B \otimes^{m_s,\ldots,m_1}_{n_s,\ldots,n_1} A\right)$$

where $\bar m_s = m_1\times\cdots\times m_s$. Then

$$\left(I_m \otimes^1_p I_p\right) \times \left(B \otimes^{1,m_s,\ldots,m_1}_{n_{s+1},n_s,\ldots,n_1} A\right) = \left(I_m \otimes^1_p I_p\right) \times \begin{bmatrix} B \otimes^{m_s,\ldots,m_1}_{n_s,\ldots,n_1} A_1 & \cdots & B \otimes^{m_s,\ldots,m_1}_{n_s,\ldots,n_1} A_{n_{s+1}} \end{bmatrix}$$
$$= \begin{bmatrix} A_1 \otimes^{1,n_s,1,\ldots,1,n_1,1}_{m_s,1,\ldots,m_1,1,\,q/\bar m_s} B & \cdots & A_{n_{s+1}} \otimes^{1,n_s,1,\ldots,1,n_1,1}_{m_s,1,\ldots,m_1,1,\,q/\bar m_s} B \end{bmatrix} = A \otimes^{n_{s+1},1,n_s,1,\ldots,1,n_1,1}_{1,m_s,1,\ldots,m_1,1,\,q/\bar m_s} B$$

and

$$\left(I_m \otimes^1_p I_p\right) \times \left(B \otimes^{m_{s+1},m_s,\ldots,m_1}_{n_{s+1},n_s,\ldots,n_1} A\right) = \left(I_m \otimes^1_p I_p\right) \times \begin{bmatrix} B_1 \otimes^{1,m_s,\ldots,m_1}_{n_{s+1},n_s,\ldots,n_1} A & \cdots & B_{m_{s+1}} \otimes^{1,m_s,\ldots,m_1}_{n_{s+1},n_s,\ldots,n_1} A \end{bmatrix}$$
$$= \begin{bmatrix} A \otimes^{n_{s+1},1,n_s,1,\ldots,1,n_1,1}_{1,m_s,1,\ldots,m_1,1,\,q/\bar m_{s+1}} B_1 & \cdots & A \otimes^{n_{s+1},1,n_s,1,\ldots,1,n_1,1}_{1,m_s,1,\ldots,m_1,1,\,q/\bar m_{s+1}} B_{m_{s+1}} \end{bmatrix} = A \otimes^{1,n_{s+1},1,n_s,1,\ldots,1,n_1,1}_{m_{s+1},1,m_s,1,\ldots,m_1,1,\,q/\bar m_{s+1}} B$$

Proposition 5.9.

$$\left(I_m \otimes^1_q I_q\right)^{-1} = I_q \otimes^1_m I_m$$

Proof. Observe that $I_m \otimes^1_q I_q$ is an orthogonal matrix, since it can be obtained by permuting the columns of the matrix $I_{mq}$. Hence $\left(I_m \otimes^1_q I_q\right)^{-1} = \left(I_m \otimes^1_q I_q\right)^T$. Further,

$$\left(I_m \otimes^1_q I_q\right)^T = \begin{bmatrix} I_m \otimes (I_q^1)^T \\ \vdots \\ I_m \otimes (I_q^q)^T \end{bmatrix} = \begin{bmatrix} (I_q^1)^T \otimes^1_m I_m \\ \vdots \\ (I_q^q)^T \otimes^1_m I_m \end{bmatrix} = I_q \otimes^1_m I_m$$

The second equality can be shown using, for example, Proposition 5.7.

Proposition 5.10.

$$\left(I_m \otimes^k_n I_q\right)^{-1} = \left(I_{nm} \otimes^{nk}_{q/n} I_{q/n}\right) \times \left(I_{kq} \otimes^k_{m/k} I_{m/k}\right)$$

assuming that the Kronecker products exist.

Proof. Observe that $I_m \otimes^1_n I_q$ is an orthogonal matrix, since it can be obtained by permuting the columns of the matrix $I_{mq}$. Hence $\left(I_m \otimes^1_n I_q\right)^{-1} = \left(I_m \otimes^1_n I_q\right)^T$. Let $I_q^{(i)}$ denote the $i$-th of the $n$ column blocks of $I_q$. Then

$$\left[I_n \otimes \left(I_{q/n} \otimes^1_m I_m\right)\right] \times \left(I_m \otimes^1_n I_q\right)^T = \begin{bmatrix} \left(I_{q/n} \otimes^1_m I_m\right) \times \left(I_m \otimes (I_q^{(1)})^T\right) \\ \vdots \\ \left(I_{q/n} \otimes^1_m I_m\right) \times \left(I_m \otimes (I_q^{(n)})^T\right) \end{bmatrix} = \begin{bmatrix} (I_q^{(1)})^T \otimes^1_m I_m \\ \vdots \\ (I_q^{(n)})^T \otimes^1_m I_m \end{bmatrix} = I_q \otimes^1_m I_m$$

where the second equality can be shown using, for example, Proposition 5.7. Hence

$$\left(I_m \otimes^1_n I_q\right)^{-1} = \left[I_n \otimes \left(I_{q/n} \otimes^1_m I_m\right)^{-1}\right] \times \left(I_q \otimes^1_m I_m\right) = \left[I_n \otimes \left(I_m \otimes^1_{q/n} I_{q/n}\right)\right] \times \left(I_q \otimes^1_m I_m\right)$$
$$= \left(I_{nm} \otimes^n_{q/n} I_{q/n}\right) \times \left(I_q \otimes^1_m I_m\right)$$

Further,

$$\left(I_m \otimes^k_n I_q\right)^{-1} = \left[\left(I_k \otimes I_{m/k}\right) \otimes^{k,1}_{1,n} I_q\right]^{-1} = \left[I_k \otimes \left(I_{m/k} \otimes^1_n I_q\right)\right]^{-1} = I_k \otimes \left(I_{m/k} \otimes^1_n I_q\right)^{-1}$$
$$= I_k \otimes \left[\left(I_{nm/k} \otimes^n_{q/n} I_{q/n}\right) \times \left(I_q \otimes^1_{m/k} I_{m/k}\right)\right] = \left[I_k \otimes \left(I_{nm/k} \otimes^n_{q/n} I_{q/n}\right)\right] \times \left[I_k \otimes \left(I_q \otimes^1_{m/k} I_{m/k}\right)\right]$$
$$= \left(I_{nm} \otimes^{nk}_{q/n} I_{q/n}\right) \times \left(I_{kq} \otimes^k_{m/k} I_{m/k}\right)$$
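The claimed factorization of the inverse in Proposition 5.10 can be checked numerically. The sketch below is ours, for one admissible choice of dimensions ($k$ dividing $m$, $n$ dividing $q$); `gk` implements the single-pair product $X \otimes^m_n Y$:

```python
import numpy as np

def gk(m, n, X, Y):
    """X ⊗^m_n Y for a single index pair: split X into m and Y into n
    column blocks and concatenate all pairwise Kronecker products."""
    return np.hstack([np.kron(Xi, Yj)
                      for Xi in np.hsplit(X, m) for Yj in np.hsplit(Y, n)])

m, k, q, n = 4, 2, 6, 3
L = gk(k, n, np.eye(m), np.eye(q))                          # I_m ⊗^k_n I_q
R = gk(n * k, q // n, np.eye(n * m), np.eye(q // n)) \
    @ gk(k, m // k, np.eye(k * q), np.eye(m // k))          # claimed inverse
```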

Proposition 5.11. Let $A$ be an $m\times n$ matrix and let $B$ be a $p\times q$ matrix. Then

$$A \otimes B = \left(I_m \otimes^1_p I_p\right) \times (B \otimes A) \times \left(I_q \otimes^1_n I_n\right)$$

Proof.

$$\left(I_m \otimes^1_p I_p\right) \times (B \otimes A) = A \otimes^1_q B = (A \otimes B) \times \left(I_n \otimes^1_q I_q\right) = (A \otimes B) \times \left(I_q \otimes^1_n I_n\right)^{-1}$$

and the claim follows after multiplying from the right by $I_q \otimes^1_n I_n$.
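A check (ours) of the two-sided commutation identity of Proposition 5.11:

```python
import numpy as np

def gk1(n, X, Y):
    """X ⊗^1_n Y = [X⊗Y_1 ... X⊗Y_n]."""
    return np.hstack([np.kron(X, Yi) for Yi in np.hsplit(Y, n)])

rng = np.random.default_rng(7)
m, n, p, q = 2, 3, 4, 2
A = rng.standard_normal((m, n)); B = rng.standard_normal((p, q))
P = gk1(p, np.eye(m), np.eye(p))           # I_m ⊗^1_p I_p
Q = gk1(n, np.eye(q), np.eye(n))           # I_q ⊗^1_n I_n
lhs = np.kron(A, B)
rhs = P @ np.kron(B, A) @ Q
```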

6 Concluding remarks

The derived formulas require matrix tensor products, which are absent when derivatives are represented as a concatenation of derivatives with respect to the scalar parameters. Hence this approach may decrease numerical efficiency.

This problem, however, can be resolved using appropriate data structures.


References

[1] T. P. Minka. Old and new matrix algebra useful for statistics. Notes, December 2000.

[2] K. B. Petersen and M. S. Pedersen. The matrix cookbook. Notes, 2006.
