
4.3 Stochastic Galerkin Matching (SGM)

4.3.6 Properties of Augmented Matrices

As mathematical functions can be defined on the basis of expansion coefficients, they can likewise be defined on the basis of augmented matrices. Before discussing how mathematical operations translate to operations on augmented matrices, let us recapitulate the following properties already outlined in the previous subsections:

1. If the expansion coefficients are available, the augmented matrix can be constructed with the effort of a matrix sum with $D + 1$ (or $P + 1$) summands.

2. The augmented matrix contains the expansion coefficients in a plain form in the first column.

The idea when defining mathematical operations using augmented matrices is to construct the matrices from the expansion coefficients, perform the operations and retrieve the expansion coefficients from the resulting matrix.

The Sum of Two Stochastic Functions

Again, we first consider the sum of two stochastic functions $Z^{(3)}(\xi) = Z^{(1)}(\xi) + Z^{(2)}(\xi)$ depending on the same set of stochastic variables $\xi$. The augmented matrices $\hat{\mathbf{Z}}^{(1)}$ and $\hat{\mathbf{Z}}^{(2)}$ are constructed as in (4.26) and summed up; the resulting matrix should then contain the expansion coefficients corresponding to the sum in its first column

$$\hat{Z}_l^{(3)} = \left[\hat{\mathbf{Z}}^{(1)} + \hat{\mathbf{Z}}^{(2)}\right]_{l,0} = \hat{Z}_l^{(1)} + \hat{Z}_l^{(2)}. \qquad (4.44)$$

Hence, the sum of two augmented matrices corresponds to the sum of the coefficients and therefore represents the sum of the stochastic functions.
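The construction of the augmented matrix and the sum rule (4.44) can be checked numerically. The following sketch assumes a single Gaussian variable with a probabilists' Hermite basis truncated at $D = 2$; the helper names `triple`, `lin_coeff`, and `augment` are illustrative choices, not taken from the text:

```python
import numpy as np
from math import factorial

D = 2  # truncation order (illustrative choice)

def triple(i, j, k):
    # <He_i He_j He_k> for probabilists' Hermite polynomials, xi ~ N(0, 1)
    s = i + j + k
    if s % 2 == 1 or max(i, j, k) > s // 2:
        return 0.0
    s //= 2
    return (factorial(i) * factorial(j) * factorial(k)
            / (factorial(s - i) * factorial(s - j) * factorial(s - k)))

def lin_coeff(m, k, i):
    # linearization coefficients e_{m,k,i} = <Phi_m Phi_k Phi_i> / <Phi_m^2>
    return triple(m, k, i) / factorial(m)

def augment(z):
    # augmented matrix: entry (m, k) = sum_i z_i e_{m,k,i}
    return np.array([[sum(z[i] * lin_coeff(m, k, i) for i in range(D + 1))
                      for k in range(D + 1)] for m in range(D + 1)])

z1 = np.array([1.0, 0.3, 0.1])
z2 = np.array([2.0, 0.5, 0.2])
print(augment(z1)[:, 0])          # first column holds the coefficients z1
print((augment(z1) + augment(z2))[:, 0])  # first column of the sum: z1 + z2
```

Since `augment` is linear in the coefficients, the first column of the summed matrices is exactly the sum of the coefficient vectors, as stated in (4.44).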

The Product of Two Stochastic Functions

For the product, one might expect that the matrix product of two augmented matrices represents the product of the functions depending on stochastic variables, and in fact the product is mapped quite well. Let us take a closer look at the underlying equations and consider the product $Z^{(3)}(\xi) = Z^{(1)}(\xi)\,Z^{(2)}(\xi)$ of two functions depending on the same set of stochastic variables. By augmenting $Z^{(1)}(\xi)$ and $Z^{(2)}(\xi)$ to $\hat{\mathbf{Z}}^{(1)}$ and $\hat{\mathbf{Z}}^{(2)}$, respectively, the augmented matrix $\hat{\mathbf{Z}}^{(3)}$ corresponding to $Z^{(3)}(\xi)$ may be written as

$$\hat{\mathbf{Z}}^{(3)} = \hat{\mathbf{Z}}^{(1)}\,\hat{\mathbf{Z}}^{(2)}. \qquad (4.45)$$

This operation occurs frequently in the literature [8,10,224–226]. Reported applications involve up to 12 successive multiplications and show reasonable agreement with reference simulations.

As the formula (4.45) represents a multiplication of stochastic scalars, the matrices should commute; the order of the factors should make no difference. If the matrices commute, the entries of the resulting matrix must be equal for both possible orders,

$$\left[\hat{\mathbf{Z}}^{(1)}\hat{\mathbf{Z}}^{(2)}\right]_{m,n} = \sum_{k=0}^{D}\sum_{i=0}^{D}\sum_{j=0}^{D} \hat{Z}_i^{(1)}\,\hat{Z}_j^{(2)}\,e_{m,k,i}\,e_{k,n,j}, \qquad (4.46)$$

$$\left[\hat{\mathbf{Z}}^{(2)}\hat{\mathbf{Z}}^{(1)}\right]_{m,n} = \sum_{k=0}^{D}\sum_{i=0}^{D}\sum_{j=0}^{D} \hat{Z}_i^{(1)}\,\hat{Z}_j^{(2)}\,e_{m,k,j}\,e_{k,n,i}. \qquad (4.47)$$

This can only be generally true if the products of the linearization coefficients are equal,

$$e_{m,k,i}\,e_{k,n,j} = e_{m,k,j}\,e_{k,n,i}. \qquad (4.48)$$

But this expression is not generally true; counterexamples can easily be constructed. For example, in the case of one stochastic variable and the choice $n = 1$, $m = 2$, $k = 0$, $i = 1$, and $j = 2$, the expressions yield

$$e_{2,0,1}\,e_{0,1,2} = 0 \neq e_{2,0,2}\,e_{0,1,1} = \gamma_2. \qquad (4.49)$$

Hence, augmented matrices do not commute in general,

$$\hat{\mathbf{Z}}^{(1)}\hat{\mathbf{Z}}^{(2)} \neq \hat{\mathbf{Z}}^{(2)}\hat{\mathbf{Z}}^{(1)}. \qquad (4.50)$$

Nevertheless, practical results have shown that the approach provides reasonable results.
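Both the non-commutativity (4.50) and the agreement of practical results can be observed numerically. The sketch below again assumes a probabilists' Hermite basis with $D = 2$; the helpers `triple`, `lin_coeff`, and `augment` are illustrative names, not from the text. The full matrix products differ, but their first columns agree:

```python
import numpy as np
from math import factorial

D = 2  # truncation order (illustrative choice)

def triple(i, j, k):
    # <He_i He_j He_k> for probabilists' Hermite polynomials, xi ~ N(0, 1)
    s = i + j + k
    if s % 2 == 1 or max(i, j, k) > s // 2:
        return 0.0
    s //= 2
    return (factorial(i) * factorial(j) * factorial(k)
            / (factorial(s - i) * factorial(s - j) * factorial(s - k)))

def lin_coeff(m, k, i):
    # linearization coefficients e_{m,k,i} = <Phi_m Phi_k Phi_i> / <Phi_m^2>
    return triple(m, k, i) / factorial(m)

def augment(z):
    # augmented matrix: entry (m, k) = sum_i z_i e_{m,k,i}
    return np.array([[sum(z[i] * lin_coeff(m, k, i) for i in range(D + 1))
                      for k in range(D + 1)] for m in range(D + 1)])

A = augment([1.0, 0.3, 0.1])
B = augment([2.0, 0.5, 0.2])
print(np.allclose(A @ B, B @ A))                  # False: (4.50)
print(np.allclose((A @ B)[:, 0], (B @ A)[:, 0]))  # True: first columns agree
```

This is precisely the behavior derived next: the commutation error lives in the columns that are discarded when the expansion coefficients are extracted.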

To explain this, one needs to take a look at the first column of the resulting matrix, i.e., the case $n = 0$. In this case, the condition (4.48), summed over $k$ as in (4.46) and (4.47), becomes

$$\sum_{k=0}^{D} e_{m,k,i}\,e_{k,0,j} = \sum_{k=0}^{D} e_{m,k,j}\,e_{k,0,i}, \qquad \sum_{k=0}^{D} e_{m,k,i}\,\delta_{k,j} = \sum_{k=0}^{D} e_{m,k,j}\,\delta_{k,i}, \qquad e_{m,j,i} = e_{m,i,j}, \qquad (4.51)$$

which holds because the linearization coefficients are symmetric in their last two indices.

Hence, the first column – and therefore the retrieved expansion coefficients – is invariant to the order. In other words, even though the commutation property is not preserved for the matrices, it is preserved when extracting the expansion coefficients. After verifying the commutation property for the first column of augmented matrices, let us take a look at the resulting expansion coefficients

$$\hat{Z}_m^{(3)} = \left[\hat{\mathbf{Z}}^{(3)}\right]_{m,0} = \sum_{k=0}^{D} \left[\hat{\mathbf{Z}}^{(1)}\right]_{m,k} \left[\hat{\mathbf{Z}}^{(2)}\right]_{k,0} = \sum_{k=0}^{D}\sum_{l=0}^{D}\sum_{n=0}^{D} \hat{Z}_l^{(1)}\,\hat{Z}_n^{(2)}\,e_{m,k,l}\,e_{k,0,n} = \sum_{l=0}^{D}\sum_{n=0}^{D} \hat{Z}_l^{(1)}\,\hat{Z}_n^{(2)}\,e_{m,n,l}. \qquad (4.52)$$

This formula is exactly the same as the one derived for the expansion coefficients (4.41).

Hence, there is no difference between determining the expansion coefficients via the double sum and performing the matrix product. Even though the matrix multiplication can require more operations (depending on the implementation) and requires more memory, since the matrices need to be stored, this procedure is usually preferred in practice. The reason is that memory is usually not the bottleneck, and matrix operations can be implemented easily and elegantly, as many programming languages support matrix multiplication natively.
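The equivalence of the double sum in (4.52) and the matrix product can be verified directly. As before, the sketch assumes a probabilists' Hermite basis with $D = 2$, and the helper names are illustrative:

```python
import numpy as np
from math import factorial

D = 2  # truncation order (illustrative choice)

def triple(i, j, k):
    # <He_i He_j He_k> for probabilists' Hermite polynomials, xi ~ N(0, 1)
    s = i + j + k
    if s % 2 == 1 or max(i, j, k) > s // 2:
        return 0.0
    s //= 2
    return (factorial(i) * factorial(j) * factorial(k)
            / (factorial(s - i) * factorial(s - j) * factorial(s - k)))

def lin_coeff(m, k, i):
    # linearization coefficients e_{m,k,i} = <Phi_m Phi_k Phi_i> / <Phi_m^2>
    return triple(m, k, i) / factorial(m)

def augment(z):
    # augmented matrix: entry (m, k) = sum_i z_i e_{m,k,i}
    return np.array([[sum(z[i] * lin_coeff(m, k, i) for i in range(D + 1))
                      for k in range(D + 1)] for m in range(D + 1)])

z1 = np.array([1.0, 0.3, 0.1])
z2 = np.array([2.0, 0.5, 0.2])
# double sum over l and n, as in the last expression of (4.52)
direct = np.array([sum(z1[l] * z2[n] * lin_coeff(m, n, l)
                       for l in range(D + 1) for n in range(D + 1))
                   for m in range(D + 1)])
# first column of the matrix product
via_matrix = (augment(z1) @ augment(z2))[:, 0]
print(np.allclose(direct, via_matrix))  # True
```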

The Multiplicative Inverse of a Stochastic Function

The problem of taking the multiplicative inverse of a stochastic function occurs frequently and will arise in the next section and the subsequent chapter. Without a rigorous proof, it is stated that the expansion coefficients extracted from the inverse of the augmented matrix represent the expansion coefficients of the multiplicative inverse of the corresponding stochastic function:

$$Z^{(1)}(\xi) \;\hat{=}\; \hat{\mathbf{Z}}^{(1)}, \qquad (4.53)$$

$$\frac{1}{Z^{(1)}(\xi)} \;\hat{=}\; \left(\hat{\mathbf{Z}}^{(1)}\right)^{-1}. \qquad (4.54)$$

To motivate this connection, consider (4.45) and replace $\hat{\mathbf{Z}}^{(3)}$ with the identity matrix $\mathbf{I}$. It is seen that the multiplicative inverse should correspond to the inverse augmented matrix, as

$$\mathbf{I} = \hat{\mathbf{Z}}^{(1)}\hat{\mathbf{Z}}^{(2)} = \hat{\mathbf{Z}}^{(1)}\left(\hat{\mathbf{Z}}^{(1)}\right)^{-1}. \qquad (4.55)$$

By construction, the augmented matrix is square. As shown in [260], augmented matrices are positive definite under the sufficient condition that

$$Z^{(1)}(\xi) > 0, \qquad (4.56)$$

meaning the corresponding augmented matrix is invertible if the stochastic function is strictly positive for all possible realizations of ξ.
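The first-column reading of (4.54) and (4.55) can be illustrated numerically: the coefficients extracted from the matrix inverse, when multiplied back onto the original function in the Galerkin sense, reproduce the coefficients of the constant function 1. The sketch uses the matrix from (4.57) with illustrative values $\mu_Z = 1$, $\sigma_Z = 0.2$:

```python
import numpy as np

mu, sigma = 1.0, 0.2  # illustrative values; matrix is invertible here
# augmented matrix of Z = mu + sigma*xi for P = 2, as in (4.57)
A = np.array([[mu,    sigma, 0.0],
              [sigma, mu,    2 * sigma],
              [0.0,   sigma, mu]])
w = np.linalg.inv(A)[:, 0]  # retrieved coefficients of the inverse
# Galerkin product of Z with its retrieved inverse = first column of A @ inv(A)
print(A @ w)  # the coefficient vector of the constant function 1
```

By construction this product is exact within the truncated basis; the approximation error of the inverse itself lies in the discarded higher-order coefficients.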

It is important to note that (4.56) is only a sufficient condition, not a necessary one. For example, consider a Gaussian distributed stochastic variable, which has infinite support and thus violates the condition (4.56). The augmented matrix corresponding to $Z^{(1)}(\xi) = \mu_Z + \sigma_Z \xi$ with $P = 2$ reads

$$\hat{\mathbf{Z}}^{(1)} = \begin{pmatrix} \mu_Z & \sigma_Z & 0 \\ \sigma_Z & \mu_Z & 2\sigma_Z \\ 0 & \sigma_Z & \mu_Z \end{pmatrix}. \qquad (4.57)$$

The eigenvalues of this augmented matrix are $\mu_Z$, $\mu_Z + \sqrt{3}\,\sigma_Z$, and $\mu_Z - \sqrt{3}\,\sigma_Z$. Hence, this augmented matrix is invertible and positive definite if $\mu_Z > \sqrt{3}\,\sigma_Z$, even though (4.56) is violated. Empirically, it is observed that the augmented matrix is invertible if the standard deviation is small compared to the mean, in other words, if a negative realization of $Z^{(1)}(\xi)$ is unlikely. From now on, invertibility is assumed in all example cases.
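The eigenvalues of (4.57) can be confirmed numerically; the values of $\mu_Z$ and $\sigma_Z$ below are illustrative:

```python
import numpy as np

mu, sigma = 1.0, 0.4  # illustrative values
# augmented matrix (4.57) for Z = mu + sigma*xi, P = 2
A = np.array([[mu,    sigma, 0.0],
              [sigma, mu,    2 * sigma],
              [0.0,   sigma, mu]])
eig = np.sort(np.linalg.eigvals(A).real)
expected = np.sort([mu - np.sqrt(3) * sigma, mu, mu + np.sqrt(3) * sigma])
print(eig)  # mu - sqrt(3)*sigma, mu, mu + sqrt(3)*sigma
```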

Given that $\hat{\mathbf{Z}}^{(1)}$ is invertible, the question is whether the coefficients extracted from the inverse converge towards the expansion coefficients of the multiplicative inverse. Empirically, it is observed that this is the case: with an increasing order of approximation, the approximation of the multiplicative inverse improves. That this should generally be the case can be seen by expressing the inverse in the form of the Neumann series

$$\left(\hat{\mathbf{Z}}^{(1)}\right)^{-1} = \sum_{i=0}^{\infty} \left(\mathbf{I} - \hat{\mathbf{Z}}^{(1)}\right)^{i}. \qquad (4.58)$$

Here, the inverse is represented in the form of sums and products; the series converges provided the spectral radius of $\mathbf{I} - \hat{\mathbf{Z}}^{(1)}$ is smaller than one. As shown at the beginning of this section, these operations converge. Therefore, the inverse should converge as well for a sufficiently large order of approximation. Please note that a higher order of approximation will be necessary compared to the product, because the truncation errors (made when conducting the products involved in evaluating the powers) might add up.
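The convergence of the partial sums of (4.58) towards the matrix inverse can be sketched as follows, again with the matrix from (4.57) and illustrative values chosen so that the spectral radius of $\mathbf{I} - \hat{\mathbf{Z}}^{(1)}$ is well below one:

```python
import numpy as np

mu, sigma = 1.0, 0.2  # illustrative; spectral radius of I - A is sqrt(3)*sigma < 1
A = np.array([[mu,    sigma, 0.0],
              [sigma, mu,    2 * sigma],
              [0.0,   sigma, mu]])
R = np.eye(3) - A
S = np.zeros((3, 3))   # partial sum of the Neumann series (4.58)
term = np.eye(3)       # current power (I - A)^i
for _ in range(60):
    S += term
    term = term @ R
residual = np.max(np.abs(S - np.linalg.inv(A)))
print(residual)  # decays geometrically with the number of terms
```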

The multiplicative inverse of a stochastic function can thus be represented by the inverse of the corresponding augmented matrix. This is observed empirically; the considerations made above should not be seen as a rigorous mathematical proof, but rather as an argument for why this behavior is plausible.