
3   Basics

3.3   Sensitivity Analysis Methods

3.3.1   Local approach

A local SA is usually performed by computing partial derivatives of the model outputs with respect to the model parameters and, thus, it is only applicable when the model outputs are differentiable in a neighborhood of the actual value $\theta^*$. The partial derivatives of the model outputs $y_i(t, \theta)$, $i = 1, \dots, n$, with respect to the model parameters $\theta_j$,

$$s_{ij} = \frac{\partial y_i(t, \theta)}{\partial \theta_j}, \qquad i = 1, \dots, n, \; j = 1, \dots, m, \qquad \text{Eq. 3-37}$$

are called first-order sensitivity coefficients and form the Jacobian, also called the sensitivity matrix in this context,

$$S = \begin{pmatrix} \dfrac{\partial y_1}{\partial \theta_1} & \cdots & \dfrac{\partial y_1}{\partial \theta_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_n}{\partial \theta_1} & \cdots & \dfrac{\partial y_n}{\partial \theta_m} \end{pmatrix}. \qquad \text{Eq. 3-38}$$

If the analytical solution of the model output $y(t, \theta)$ is known, the sensitivity matrix can be obtained by direct differentiation. This is usually not the case, and numerical methods therefore have to be applied. The simplest method to calculate local sensitivities numerically is the finite-difference approximation: one parameter at a time is changed slightly while the others are held fixed, and the model is rerun,

$$\frac{\partial y_i(t, \theta)}{\partial \theta_j} \approx \frac{y_i(t, \theta + \Delta\theta_j e_j) - y_i(t, \theta)}{\Delta\theta_j}, \qquad i = 1, \dots, n, \; j = 1, \dots, m, \qquad \text{Eq. 3-39}$$

where $e_j$ is the $j$-th unit vector. This approximation is called the forward difference. It is also possible to approximate the partial derivatives by backward or central differences, respectively:

$$\frac{\partial y_i(t, \theta)}{\partial \theta_j} \approx \frac{y_i(t, \theta) - y_i(t, \theta - \Delta\theta_j e_j)}{\Delta\theta_j}, \qquad i = 1, \dots, n, \; j = 1, \dots, m, \qquad \text{Eq. 3-40}$$

$$\frac{\partial y_i(t, \theta)}{\partial \theta_j} \approx \frac{y_i(t, \theta + \Delta\theta_j e_j) - y_i(t, \theta - \Delta\theta_j e_j)}{2\,\Delta\theta_j}, \qquad i = 1, \dots, n, \; j = 1, \dots, m. \qquad \text{Eq. 3-41}$$

The calculation of the sensitivity matrix requires $m + 1$ simulations of the model if forward or backward differences are applied and $2m$ simulations in the case of central differences. The advantage of this method is that no extra code beyond the original ODE solver is needed to calculate the sensitivities. On the other hand, it is difficult to find the right size of the parameter change because the accuracy of the sensitivities depends on it: if the change is too small, the difference between the original and the perturbed solution is dominated by round-off error; if it is too large, the linear approximation fails. To account for the different magnitudes of the parameters, it is advisable to choose the parameter change as a percentage of the actual value,

$$\Delta\theta_j = \eta\,\theta_j, \qquad \text{Eq. 3-42}$$

where $0 < \eta < 1$.
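The following Python sketch implements the central-difference approximation (Eq. 3-41) together with the relative step size of Eq. 3-42. It is a minimal illustration, not an implementation from the text: the function name sensitivity_matrix, the default eta, and the two-parameter test model are all hypothetical.

import numpy as np

def sensitivity_matrix(model, theta, eta=1e-4):
    """Central-difference approximation of S (Eq. 3-41) with relative steps (Eq. 3-42)."""
    theta = np.asarray(theta, dtype=float)
    m = theta.size
    n = np.atleast_1d(model(theta)).size   # one extra run to size the output
    S = np.empty((n, m))
    for j in range(m):
        d = eta * theta[j] if theta[j] != 0.0 else eta   # Delta theta_j = eta * theta_j
        e = np.zeros(m)
        e[j] = 1.0                                        # j-th unit vector e_j
        y_plus = model(theta + d * e)                     # 2m perturbed model runs in total
        y_minus = model(theta - d * e)
        S[:, j] = (y_plus - y_minus) / (2.0 * d)
    return S

# Hypothetical model with n = 2 outputs and m = 2 parameters
model = lambda th: np.array([th[0] * np.exp(-th[1]), th[0] + th[1]])
print(sensitivity_matrix(model, [2.0, 0.5]))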

To compare the local sensitivities, the entries of the sensitivity matrix have to be normalized.

One possible way to do this is to evaluate the partial derivatives of the logarithm of the model outputs and multiply them by the actual parameter values,

$$\bar{s}_{ij} = \theta_j\,\frac{\partial \ln\!\big(y_i(t, \theta) + 1\big)}{\partial \theta_j} = \frac{\theta_j}{y_i(t, \theta) + 1}\,\frac{\partial y_i(t, \theta)}{\partial \theta_j}, \qquad i = 1, \dots, n, \; j = 1, \dots, m. \qquad \text{Eq. 3-43}$$

Thereby, the model output is shifted by one to avoid negative values of the logarithm if the model output is less than one.
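Continuing the sketch above, the normalization of Eq. 3-43 is a simple rescaling of the computed matrix; model, theta, and sensitivity_matrix are the hypothetical names from the previous example.

theta = np.array([2.0, 0.5])
S = sensitivity_matrix(model, theta)
y = np.atleast_1d(model(theta))                  # actual model outputs y_i
S_bar = theta[None, :] / (y[:, None] + 1.0) * S  # Eq. 3-43: (theta_j / (y_i + 1)) * dy_i/dtheta_j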

However, a practical difficulty with the sensitivity matrix is often its size. If a model consists of 25 variables and 20 parameters, the matrix has 500 elements, and if, in addition, 100 time points are studied, then 50,000 sensitivities have to be compared. Thus, it is necessary to summarize the sensitivity information. This can be achieved by using an objective function (Eq. 3-6), which converts the multivariate output of a model into a single value. The partial derivatives of the objective function with respect to the model parameters are then given by

$$\frac{\partial V(\theta)}{\partial \theta_j}, \qquad j = 1, \dots, m, \qquad \text{Eq. 3-44}$$

which is also known as the gradient $\nabla V(\theta)$ of the objective function $V$.
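The same finite-difference idea yields the gradient of Eq. 3-44; a minimal sketch, assuming a hypothetical scalar objective V (the quadratic test function below is illustrative only):

import numpy as np

def gradient_fd(V, theta, eta=1e-6):
    """Forward-difference approximation of the gradient of a scalar objective (Eq. 3-44)."""
    theta = np.asarray(theta, dtype=float)
    g = np.empty_like(theta)
    V0 = V(theta)                     # one baseline run, m perturbed runs
    for j in range(theta.size):
        d = eta * theta[j] if theta[j] != 0.0 else eta
        e = np.zeros(theta.size)
        e[j] = 1.0
        g[j] = (V(theta + d * e) - V0) / d
    return g

# Hypothetical quadratic objective V(theta) = 0.5 * theta^T A theta, gradient A theta
A = np.array([[2.0, 0.3], [0.3, 0.5]])
V = lambda th: 0.5 * th @ A @ th
print(gradient_fd(V, [1.0, -2.0]))    # close to A @ [1, -2] = [1.4, -0.7]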

First-order local sensitivity coefficients of the objective function only give information about the change of single parameters; it is, however, also important to analyze the effect of changing several parameters simultaneously. Principal component analysis is based on local sensitivities and can be used to estimate the effect of simultaneous parameter changes (Vajda et al. 1985, Bard 1974).

Assuming that $V$ is twice continuously differentiable in an open neighborhood of the actual point $\theta^*$, $V$ can be expanded in a Taylor series at $\theta^*$,

$$V(\theta^* + \Delta\theta) \approx V(\theta^*) + \nabla V(\theta^*)^T \Delta\theta + \frac{1}{2}\,\Delta\theta^T H\,\Delta\theta, \qquad \text{Eq. 3-45}$$

where $\Delta\theta = \theta - \theta^*$, $\nabla V(\theta^*)$ is the gradient of $V$ at $\theta^*$, and $H$ is the Hessian of $V$ at $\theta^*$. If $\theta^*$ is a minimum of $V$, then $\nabla V(\theta^*) = 0$ (see Theorem 3.1) and the objective function can be approximated by

$$V(\theta^* + \Delta\theta) \approx V(\theta^*) + \frac{1}{2}\,\Delta\theta^T H\,\Delta\theta. \qquad \text{Eq. 3-46}$$

Hereafter, it is additionally assumed for simplicity that $V(\theta^*) = 0$ if $\theta^*$ is a minimizer of $V$. The expression in Eq. 3-46 is then a quadratic approximation of the real shape of the objective function, so that for any fixed $c > 0$ the inequality

$$0 < \Delta\theta^T H\,\Delta\theta \le c \qquad \text{Eq. 3-47}$$

defines an ellipsoid in the parameter space whose principal axes are determined by the Hessian $H$ (see Figure 3.11). The orientation of the ellipsoid with respect to the parameter axes is given by the eigenvectors of the Hessian, while the relative lengths of the axes are revealed by its eigenvalues. The function $V$ is most sensitive to a change in $\Delta\theta$ along the principal axis corresponding to the largest eigenvalue and least sensitive to a change in $\Delta\theta$ along the principal axis corresponding to the smallest eigenvalue. If all principal axes of the ellipsoid are parallel to the axes of the parameter space, there is no synergistic effect among the parameters.

Figure 3.11: An approximated region defined by Eq. 3-47

This interpretation can be made concrete by using the term principal component. A principal component is a new parameter obtained as a linear combination of the original parameters,

$$\Delta\xi = W^T \Delta\theta, \qquad \text{Eq. 3-48}$$

where $W$ is the matrix of normalized eigenvectors obtained by diagonalization (eigenvalue-eigenvector decomposition) of the Hessian $H$,

$$H = W \Lambda W^T, \qquad \text{Eq. 3-49}$$

with the diagonal matrix $\Lambda$ formed by the eigenvalues of $H$. The objective function can be rewritten in terms of the new parameters $\Delta\xi$:

$$\Delta\theta^T H\,\Delta\theta = \Delta\theta^T W \Lambda W^T \Delta\theta = (W^T \Delta\theta)^T \Lambda\,(W^T \Delta\theta) = \Delta\xi^T \Lambda\,\Delta\xi = \sum_{j=1}^{m} \lambda_j\,\Delta\xi_j^2. \qquad \text{Eq. 3-50}$$

This equation provides another explanation of why the eigenvectors of the matrix $H$ reveal the related parameter groups and why the corresponding eigenvalues show the relative weights of these groups. It clearly displays the inverse relationship between the lengths of the principal axes and the square roots of the eigenvalues: on the boundary $\Delta\theta^T H\,\Delta\theta = c$, the change along the axis of $\lambda_j$ is bounded by $|\Delta\xi_j| = \sqrt{c / \lambda_j}$. The largest eigenvalue indicates the parameter group to which $V$ is most sensitive, corresponding to a change in the associated $\Delta\xi_j$, and the smallest eigenvalue indicates the parameter group to which $V$ is least sensitive.
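As a sketch of the decomposition in Eq. 3-48 to Eq. 3-50, the snippet below diagonalizes a symmetric (hypothetical) Hessian with numpy.linalg.eigh and orders the principal components by eigenvalue, i.e. by the sensitivity of the corresponding parameter group:

import numpy as np

def principal_components(H):
    """Eigendecomposition H = W Lambda W^T (Eq. 3-49), sorted by descending eigenvalue."""
    lam, W = np.linalg.eigh(H)        # symmetric H: ascending eigenvalues, orthonormal columns
    order = np.argsort(lam)[::-1]     # most sensitive parameter group first
    return lam[order], W[:, order]

# Hypothetical 2x2 Hessian
H = np.array([[2.0, 0.3],
              [0.3, 0.5]])
lam, W = principal_components(H)
for k in range(lam.size):
    # column k of W gives the weights of the k-th principal component Delta xi_k
    print(f"lambda_{k + 1} = {lam[k]:.4f}, eigenvector = {W[:, k]}")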

Example 3.16 (Bard 1974)

Consider the two-dimensional case with

$$H = \begin{pmatrix} 0.505 & -0.495 \\ -0.495 & 0.505 \end{pmatrix}$$

and $c = 1$. Thereby, Eq. 3-47 reduces to

$$0.505\,\Delta\theta_1^2 - 0.99\,\Delta\theta_1 \Delta\theta_2 + 0.505\,\Delta\theta_2^2 \le 1. \qquad \text{Eq. 3-51}$$

The normalized eigenvectors of the matrix $H$ are $w_1 = (0.7071, -0.7071)^T$ and $w_2 = (0.7071, 0.7071)^T$ with the corresponding eigenvalues $\lambda_1 = 1$ and $\lambda_2 = 0.01$. From this follows

$$\Delta\xi = W^T \Delta\theta = \begin{pmatrix} 0.7071 & -0.7071 \\ 0.7071 & 0.7071 \end{pmatrix} \begin{pmatrix} \Delta\theta_1 \\ \Delta\theta_2 \end{pmatrix} = \begin{pmatrix} 0.7071\,(\Delta\theta_1 - \Delta\theta_2) \\ 0.7071\,(\Delta\theta_1 + \Delta\theta_2) \end{pmatrix}$$

and hence

$$\Delta\theta^T H\,\Delta\theta = \Delta\xi^T \Lambda\,\Delta\xi = \Delta\xi_1^2 + 0.01\,\Delta\xi_2^2.$$

Hence, the principal axes have the lengths $1/\sqrt{\lambda_1} = 1$ and $1/\sqrt{\lambda_2} = 10$. It is clear from Figure 3.12 that if the parameters are changed simultaneously in the same direction, large changes can be made before the boundary of $1$ is exceeded. In fact, $\Delta\theta_1 = \Delta\theta_2 = 7.071$ satisfies Eq. 3-51.

On the other hand, if the changes in $\Delta\theta_1$ and $\Delta\theta_2$ are taken in opposite directions, the boundary is much lower: only $\Delta\theta_1 = -\Delta\theta_2 = 0.7071$ satisfies Eq. 3-51. Thus, the principal component $\Delta\xi_1$ is very sensitive to changes and $\Delta\xi_2$ is far less sensitive.

Figure 3.12: The approximated region for the matrix $H$ of Example 3.16 with $c = 1$
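The numbers of Example 3.16 are easy to verify numerically; a short check in Python, using the matrix $H$ and $c = 1$ from the example:

import numpy as np

H = np.array([[0.505, -0.495],
              [-0.495, 0.505]])
lam, W = np.linalg.eigh(H)            # eigenvalues in ascending order
print(lam)                            # -> [0.01 1.  ]
print(1.0 / np.sqrt(lam))             # principal axis lengths -> [10.  1.]

q = lambda d: d @ H @ d               # quadratic form of Eq. 3-51
print(q(np.array([7.071, 7.071])))    # same direction: ~1.0, just at the boundary
print(q(np.array([0.7071, -0.7071]))) # opposite directions: ~1.0 as well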