

function $n_p$ given by

\[
s \le r_0:\quad n_p(r) :=
\begin{cases}
c(r_0)(r + d(r_0))^2, & r \le r_0,\\
n(r), & r_0 < r < r_m,\\
c(r_m)(r + d(r_m))^2, & r \ge r_m,
\end{cases}
\qquad
r_0 < s:\quad n_p(r) :=
\begin{cases}
c(0)(r + d(0))^2, & r \le 0,\\
n(r), & 0 < r < r_0,\\
c(r_0)(r + d(r_0))^2, & r \ge r_0.
\end{cases}
\]

Proof. First consider the penalty polynomials and define $p_\nu(x) = c(\nu)(x + d(\nu))^2$. It is then easy to validate that $p_\nu(\nu) = n(\nu)$ and $p_\nu'(\nu) = n'(\nu)$. These polynomials will be used to extend $n$ at appropriate points. Existence and uniqueness of $r_s$ have already been shown in Lemma 5.1.5. Consider first the case $s < r_0$. As the necessary condition for a minimizer $r \ge r_0$ we have
\[
0 = S_s'(r) = \frac{\varphi'(r) - S_s(r)}{r - s},
\]
and since $s < r_s$ by Lemma 5.1.5, $0 = n(r) = \varphi'(r) - S_s(r)$ is also sufficient for $r \ge r_0$. Since $n(s) = \varphi'(s) - S_s(s) = 0$, we replace $n(r)$ for $r < r_0$ by $p_{r_0}(r)$. Even though $S_s'(r) > 0\ \forall\, r > r_s$ by Lemma 5.1.6 and hence $n(r) \ne 0\ \forall\, r > r_s$, we see that $\lim_{r\to\infty} n(r) = 0$. To avoid the Newton iteration being drawn to $\infty$, we replace $n(r)$ by $p_{r_m}$ for $r \ge r_m$, as we know that $r_s < r_m\ \forall\, s \in \mathbb{R}_+$ from Lemma 5.1.14. For $s > r_0$ we have $0 \le r_s$, and so we replace $n(r)$ by $p_0$ for $r \le 0$ and by $p_{r_0}$ for $r_0 \le r$, since again $n(s) = 0$. Of course, $s = r_0$ means $r_s = r_0$ and no iterations have to be performed.
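As an aside, the construction lends itself directly to implementation. The following Python sketch runs Newton's method on $n_p$, deriving $c(\nu)$ and $d(\nu)$ purely from the matching conditions $p_\nu(\nu) = n(\nu)$, $p_\nu'(\nu) = n'(\nu)$ stated above; the thesis' concrete formulas for $c$ and $d$ are not reproduced here, so treat this choice as an assumption.

```python
def newton_on_np(n, dn, s, r0, rm, r_init, tol=1e-12, maxit=50):
    """Newton iteration on the penalty-extended function n_p.

    n, dn: callables evaluating n(r) and n'(r).
    The quadratic penalty p_nu(r) = c(nu) * (r + d(nu))**2 is made to
    match n and n' at nu; c(nu) and d(nu) below follow from exactly
    these two conditions (requires n(nu) != 0 and n'(nu) != 0).
    """
    def coeffs(nu):
        d = 2.0 * n(nu) / dn(nu) - nu       # from p'(nu) = n'(nu)
        c = dn(nu) ** 2 / (4.0 * n(nu))     # from p(nu)  = n(nu)
        return c, d

    # Case distinction from the statement: on [lo, hi] we keep n itself.
    lo, hi = (r0, rm) if s <= r0 else (0.0, r0)
    c_lo, d_lo = coeffs(lo)
    c_hi, d_hi = coeffs(hi)

    def np_val(r):
        if r <= lo: return c_lo * (r + d_lo) ** 2
        if r >= hi: return c_hi * (r + d_hi) ** 2
        return n(r)

    def np_der(r):
        if r <= lo: return 2.0 * c_lo * (r + d_lo)
        if r >= hi: return 2.0 * c_hi * (r + d_hi)
        return dn(r)

    r = r_init
    for _ in range(maxit):
        step = np_val(r) / np_der(r)
        r -= step
        if abs(step) < tol:
            break
    return r
```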

5.1.5 Numerical experiments

In the following we shall employ synthetic dynamical systems using kernel expansions as nonlinearity. This allows us to pursue experiments for systems that do not have an approximation part $\hat f$ and hence no EA term.

Before we state the experimental setup, we introduce two more error estimator modifications. First, the convergence result from Theorem 5.1.10 motivates a heuristic variant of our local estimators. It originates from the fact that the limit function ∆LSLE reproduces itself when used as a-priori bound within the iterations. Numerically (up to the integration error) this can be exploited by using the error estimate from the previous time step as bound at the current time step. We will refer to this time-discrete method as "LSLE TD". Experiments indicate that this variant indeed seems to bound the iterated LSLE estimators from below. Second, when setting

\[
\beta(t) = \frac{\|f(x(t)) - f(x_r(t))\|_G}{\|x(t) - x_r(t)\|_G}
\quad\text{or}\quad
\beta(t,\mu) = \frac{\|f(x(t),t,\mu) - f(x_r(t),t,\mu)\|_G}{\|x(t) - x_r(t)\|_G},
\]

depending on the considered case, we obtain the smallest possible estimation for this estimator structure, since this is the best local Lipschitz constant at any $t \in [0, T]$. As this version requires $x(t)$, it is not considered a practical error estimator but rather a comparative "Lower Bound".
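To make the two modifications concrete, here is a minimal, heavily simplified Python sketch. It assumes a generic comparison-ODE bound structure $\Delta' = \beta\Delta + \alpha$ rather than the thesis' exact LSLE formulas; the names `alpha` and `beta_local` are ours. The point is where the TD trick enters: the local Lipschitz estimate at step k is evaluated with the previous step's bound.

```python
import numpy as np

def lsle_td(alpha, beta_local, dt, K, delta0=0.0):
    """Heuristic "LSLE TD" sketch: propagate an error bound of the
    generic comparison form delta' = beta * delta + alpha, where the
    local Lipschitz estimate beta is evaluated using the *previous*
    step's bound as a-priori bound (the time-discrete trick).

    alpha: array of residual-type terms per time step (assumed given).
    beta_local(k, bound): local Lipschitz constant estimate at step k.
    """
    delta = np.zeros(K + 1)
    delta[0] = delta0
    for k in range(K):
        beta_k = beta_local(k, delta[k])    # reuse previous estimate
        delta[k + 1] = delta[k] + dt * (beta_k * delta[k] + alpha[k])
    return delta

def beta_lower_bound(f, x_t, xr_t, G):
    """Best possible local Lipschitz constant ("Lower Bound" variant);
    needs the full trajectory x(t), hence only useful for comparison."""
    gnorm = lambda v: np.sqrt(v @ (G @ v))
    return gnorm(f(x_t) - f(xr_t)) / gnorm(x_t - xr_t)
```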

Our first test environment aims at a system without parameters but with additional input. Let $d = 240000$ in order to represent a large-scale system, $T = 20$, $G = I_d$. The kernel expansion (5.8, p. 142) uses $N = 20$ and $(x_i)_j := \frac{50(i-1)}{N-1},\ i = 1\ldots N,\ j = 1\ldots d$. The kernel used is a Gaussian $K(x,y) = \exp(-\|x-y\|_2^2/\gamma^2)$ with $\gamma = 224$, which is chosen to have
\[
K(x_i, x_j) < 10^{-5} \quad \forall\, |i-j| \ge 2.
\]

This way a certain locality of the kernel expansion is ensured. Finally, the expansion coefficient vectors are given via
\[
(c_i)_j = \exp(-(x_i)_j/15), \quad i = 1\ldots N,\ j = 1\ldots d.
\]

Further we set $x_0 = (1 \ldots 1)^T \in \mathbb{R}^d$ and $B = (1 \ldots 1)^T \in \mathbb{R}^{d\times 1}$. To emphasize the input-dependent behavior of our estimators, we choose $l = 1$ and two system inputs
\[
u_1(t) = \frac{1}{25}\sin\!\left(\frac{t}{3}\right), \qquad u_2(t) = \frac{1}{2}e^{-(12-t)^2},
\]
which represent one oscillating and one localized input. We use an explicit Euler scheme with time step $\Delta t = 0.05$ as solver.
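A compact Python sketch of this setup follows, with $d$ reduced so the script actually runs (the experiment uses $d = 240000$). The Gaussian normalization $\exp(-\|x-y\|_2^2/\gamma^2)$ and the system form $x' = f(x) + Bu(t)$ are assumptions of this sketch.

```python
import numpy as np

d, N, T, dt, gamma = 240, 20, 20.0, 0.05, 224.0   # d = 240000 in the thesis

# Centers: x_i has all components equal to 50*(i-1)/(N-1).
X = np.outer(50.0 * np.arange(N) / (N - 1), np.ones(d))   # shape (N, d)
C = np.exp(-X / 15.0)                                     # coefficients c_i

def f(x):
    """Kernel expansion f(x) = sum_i c_i K(x, x_i) with a Gaussian kernel."""
    k = np.exp(-np.sum((X - x) ** 2, axis=1) / gamma ** 2)  # K(x, x_i)
    return C.T @ k

B = np.ones((d, 1))
u1 = lambda t: np.sin(t / 3.0) / 25.0
u2 = lambda t: 0.5 * np.exp(-(12.0 - t) ** 2)

# Explicit Euler solver for x' = f(x) + B u(t), x(0) = (1,...,1)^T.
x = np.ones(d)
for k in range(int(T / dt)):
    x = x + dt * (f(x) + B[:, 0] * u1(k * dt))
```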

In order to investigate the error estimator behavior, we decided to have a means of controlling the projection subspace quality. Since the system setup is homogeneous in each component, we would obtain zero approximation error for any input using $\hat V = \hat W = (1, \ldots, 1)^T/\sqrt{d}$ as projection matrices. Then, for a given angle $\theta$ we set the rotated subspace projection matrices to $V := R\hat V$, $W := R\hat W$ for an orthogonal block matrix $R := \operatorname{diag}(\hat R, I_{d-200})$ with
\[
\hat R := \operatorname{diag}(R_2, \ldots, R_2) \in \mathbb{R}^{200\times 200}, \qquad
R_2 = \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}.
\]

This way, θ continuously controls the quality of the projection subspace.
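In code, $V$ can be rotated without ever forming the $d \times d$ matrix $R$, since only the first 200 rows are affected. A sketch, using scipy's `block_diag` (names ours):

```python
import numpy as np
from scipy.linalg import block_diag

def rotated_projection(d, theta, n_rot=200):
    """V = R @ Vhat with R = diag(Rhat, I_{d-200}); only the first
    n_rot rows are rotated, so I_{d-200} need not be formed."""
    Vhat = np.ones((d, 1)) / np.sqrt(d)
    R2 = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
    Rhat = block_diag(*([R2] * (n_rot // 2)))   # 200 x 200 block diagonal
    V = Vhat.copy()
    V[:n_rot] = Rhat @ Vhat[:n_rot]
    return V    # W := R @ What equals V here, since What = Vhat
```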

The results for the different error estimators can be seen in Figure 5.2. One immediately notices the large exponential growth rates of both the GLE and the LSLE variant (without iterations). As we do not pose any assumptions on the stability of the considered systems, those rates are a necessary drawback of rigorous error estimators. However, compared to the GLE, the LSLE has a significantly lower exponential increase rate, which again is reduced drastically to a useful level when using estimator iterations or the LSLE TD variant.

[Plots: absolute and relative error estimates over time; curves: True error, GLE, LSLE, LSLE 1 It, LSLE 2 It, LSLE 5 It, LSLE TD.]

Figure 5.2: Absolute (left) and relative (right) error estimates using θ = 0.0005 and input u1

Note here that the main results are the iterable LSLE estimator and the LSLE TD variant; the other estimators are included for comparison purposes.

The rigor of all estimators is verified as all are upper bounds of the true error. We also see that the LSLE TD variant is a tight lower bound for the iterated LSLE estimator (LSLE, 5 It and LSLE TD are indistinguishable), which suggests that the LSLE TD variant is the numerical pendant of the iteration limit ∆LSLE. Due to its very good performance in both computational cost and estimation sharpness, this is the best estimator to use in practice.

Furthermore, the LSLE TD variant grows at the same rate as the "Lower Bound" estimation up to about t = 16, which shows the effectiveness of the local Lipschitz constants together with the a-priori bounds. We also see that iterations of the LSLE estimator improve the error bound, but e.g. the first and second LSLE iterations fall back to their specific increase rates after a certain time. This occurs when the a-priori bound is too big to have a positive effect regarding the choice (5.15, p. 151) from Proposition 5.1.7.

The left image in Figure 5.3 shows the computation time plotted against the error estimate for each estimator variant. In one extreme, the full error computation takes about 20s (top left) but is of course the sharpest. The green square in the lower right corner denotes the GLE estimator, which is cheap (by computation time) but too large to be usable. Now, comparing identical star symbols, we see that estimator iterations also come with higher computational costs while improving the bound. Hence, the iteration number can be used as a balancing parameter between online run-time and error bound accuracy.

The right hand image of Figure 5.3 shows the output using u2 and θ = 0.05. We see that it is nicely bounded by the LSLE TD estimator; note that the reduced and full models' outputs are indistinguishable in this plot. Identical conclusions can be drawn using either input for Figures 5.2 and 5.3.

[Plots. Left: computation time [s] against ∆(T) for True error, GLE, LSLE, LSLE 1/2/5 It, LSLE TD. Right: output norms over time for the full system, reduced system, lower bound and upper bound.]

Figure 5.3: Left: Computation times. Right: Output for u2 and θ = 0.05; the output error is bounded by the LSLE TD variant

Table 5.1 shows the estimated errors at T = 20 for different θ values and both inputs.

Model            ||e(T)||_G   GLE       LSLE      LSLE 2 It  LSLE 5 It  LSLE TD   Low. b.
θ = 0.0005, u1   1.3E−5       1.1E+73   3.2E+11   1.4E+2     6.6E−5     6.6E−5    2.2E−5
θ = 0.005,  u1   1.3E−4       1.1E+74   3.2E+12   1.2E+5     6.7E−4     6.6E−4    2.2E−4
θ = 0.05,   u1   1.3E−3       1.1E+75   3.2E+13   1.2E+8     2.3E+0     6.7E−3    2.2E−3
θ = 0.5,    u1   1.3E−2       1.1E+76   3.1E+14   1.0E+11    9.2E+5     8.0E−2    2.2E−2
θ = 0.0005, u2   2.3E−5       1.1E+73   8.5E+11   2.3E+3     1.8E−4     1.8E−4    3.0E−5
θ = 0.005,  u2   2.3E−4       1.1E+74   8.5E+12   1.0E+6     2.7E−3     1.8E−3    3.0E−4
θ = 0.05,   u2   2.3E−3       1.1E+75   8.5E+13   5.6E+8     3.6E+2     1.9E−2    3.0E−3
θ = 0.5,    u2   2.3E−2       1.1E+76   8.4E+14   3.4E+11    3.5E+7     3.6E+0    3.1E−2

Table 5.1: Errors of estimation runs for different θ values and inputs u1, u2 at T = 20

Although the true error is of the same magnitude for both inputs and any θ, the estimators perform differently for u1 and u2. The table emphasizes the huge improvements, over many orders of magnitude, from the GLE to the LSLE and again to the iterated LSLE versions.

Next, we will pursue experiments for affine-parameterized synthetic systems. The test setting is the same as before but with some new quantities. Let $\mathbf{1} := (1 \ldots 1)^T \in \mathbb{R}^d$ and choose $P = [0,1]\times[0,10]\times[-1,1]$ as parameter domain. Now we assume a kernel expansion like (5.22, p. 158) or (3.9, p. 98), albeit without direct time-dependency ($K_t \equiv 1$). The expansion (5.22, p. 158) uses $N = 20$ and centers
\[
x_i := \frac{50(i-1)}{N-1}\,\mathbf{1}, \qquad \mu_i := \frac{10(i-1)}{N-1}\,(0,1,0)^T, \qquad i = 1\ldots N.
\]
Further, $K_P$ is a Gaussian with $\gamma_P = 5.3733$. Note that $K_P$ only uses the second entry of µ, while µ1 and µ3 are ignored. Finally, the expansion coefficient vectors are given via $c_i = \exp(-x_i/15) \in \mathbb{R}^d,\ i = 1\ldots N$, and we define $x_0(\mu) = \mu_3\mathbf{1}$ as initial value. So parameter µ2 is an "expansion parameter" influencing the system's inner dynamics and µ3 sets the "initial value"; µ1 will be discussed later. As in [182, 181], we use θ = 0.05 to control $V$ and hence the subspace quality, and we average the output using $C(t,\mu) = \frac{1}{\sqrt{d}}\mathbf{1}^T$.
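A sketch of this parametric expansion in Python ($d$ again reduced; that $K_P$ acts as a one-dimensional Gaussian on µ2 only is our reading of the statement above):

```python
import numpy as np

d, N, gamma, gamma_P = 240, 20, 224.0, 5.3733   # d reduced for the sketch

ones = np.ones(d)
X   = np.outer(50.0 * np.arange(N) / (N - 1), ones)  # centers x_i
MU2 = 10.0 * np.arange(N) / (N - 1)                  # (mu_i)_2; entries 1, 3 are zero
C   = np.exp(-X / 15.0)                              # coefficients c_i

def f(x, mu):
    """f(x, mu) = sum_i c_i K(x, x_i) K_P(mu, mu_i), with K_t == 1."""
    kx  = np.exp(-np.sum((X - x) ** 2, axis=1) / gamma ** 2)
    kmu = np.exp(-((MU2 - mu[1]) ** 2) / gamma_P ** 2)  # uses mu_2 only
    return C.T @ (kx * kmu)

x0   = lambda mu: mu[2] * ones   # initial value x_0(mu) = mu_3 * 1
Cout = ones / np.sqrt(d)         # averaging output functional C(t, mu)
```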

In the next figures, we compare the different estimated output errors $\|e_w(t)\|$ using µ2 = 5, µ3 = −0.2. Figure 5.4 shows the absolute and relative output errors over time.

[Plots: absolute (left) and relative (right) error estimates over time; curves: True error, GLE, LSLE, LSLE 1/2/5 It, LSLE TD.]

Figure 5.4: Left: Absolute L2 state space errors of estimators. Right: Relative errors for estimators, using µ2 = 5, µ3 = −0.2

The improvement of the LSLE over the GLE variant is due to the local Lipschitz constant estimations, and the LSLE iterations further improve the estimation by several orders of magnitude. The fact that the LSLE is indistinguishable from the discrete LSLE TD variant after five iterations shows a fast convergence of the iteration scheme and strongly supports the applicability of the heuristic LSLE TD variant as substitute for ∆LSLE(t, µ). The left hand plot in Figure 5.5 shows the computation times against the estimated output errors at T = 20. All estimators are 10-20 times faster than computing the full error. The GLE is the cheapest but coarsest estimator, and the LSLE TD is slightly slower but yields the best estimation results. Moreover, the right hand plot of Figure 5.5 contains a parameter sweep for µ2 ranging over [0, 10].

[Plots. Left: computation time [s] against ∆(T) for True error, GLE, LSLE, LSLE 1/2/5 It, LSLE TD. Right: parameter sweep.]

Figure 5.5: Left: Computation times for estimator variants. Right: Parameter sweep for µ2 ∈ [0, 10], µ1 = 0, µ3 = −0.2

Shown are the simulation outputs up to T = 20 along with the error bounds of the LSLE TD in transparent light red. The error bounds stay sharp over the whole parameter range [0, 10], even though the system's dynamics change considerably for different µ2.

Table 5.2 shows the output errors at T = 20 along with the computation times and the overestimation factors. We also observe that too many iterations of the LSLE estimation do not necessarily yield a relevant improvement. The LSLE TD estimator only overestimates by a factor of 7.6.

In order to show the influence of external input we choose an affine-parametric input matrix
\[
B(t,\mu) = \mu_1 \begin{pmatrix} \mathbf{1} & \mathbf{0} \end{pmatrix} + (1-\mu_1) \begin{pmatrix} \mathbf{0} & \mathbf{1} \end{pmatrix} \in \mathbb{R}^{d\times 2},
\]
with inputs
\[
u_1(t) = \begin{pmatrix} \frac{2}{5}\sin(3t) \\ e^{-(12-t)^2} \end{pmatrix}, \qquad
u_2(t) = \begin{pmatrix} \frac{1}{2}\sin(2t) \\ 4e^{-7(12-t)^2} + \frac{1}{2}e^{-(5-t)^2} \end{pmatrix}.
\]
Each input combines an oscillating and a localized stimulation of a different kind.
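Translated into a small Python sketch (the sum in the second component of `u2` follows the reconstruction above and should be treated as an assumption):

```python
import numpy as np

d = 240                                # 240000 in the experiment
col = lambda v: np.tile(v, (d, 1))     # d x 2 block from a 2-vector

# Affine-parametric input matrix: mu_1 blends the two input channels.
B = lambda t, mu: mu[0] * col([1.0, 0.0]) + (1.0 - mu[0]) * col([0.0, 1.0])

u1 = lambda t: np.array([0.4 * np.sin(3.0 * t),
                         np.exp(-(12.0 - t) ** 2)])
u2 = lambda t: np.array([0.5 * np.sin(2.0 * t),
                         4.0 * np.exp(-7.0 * (12.0 - t) ** 2)
                         + 0.5 * np.exp(-(5.0 - t) ** 2)])
```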

For both settings, Figure 5.6 shows that the LSLE TD estimator gives good estimations over the parameter range.

Figure 5.6: Parameter sweep for input shift µ1 ∈ [0, 1] and u1 (left) or u2 (right)

Name          ∆(20)        Time     Overestimation
True error    3.650e−03    21.43s   1.000e+00
GLE           3.682e+15     0.62s   1.009e+18
LSLE          3.251e+01     2.05s   8.907e+03
LSLE, 1 It    1.568e−01     2.79s   4.295e+01
LSLE, 2 It    2.839e−02     3.11s   7.779e+00
LSLE, 5 It    2.801e−02     4.13s   7.674e+00
LSLE TD       2.801e−02     0.90s   7.674e+00
Lower bound   3.652e−03    44.02s   1.001e+00

Table 5.2: Estimator statistics at T = 20

Finally, Figure 5.7 displays a 2D sweep for µ1, µ2 and the output and error bounds by LSLE TD at T = 20. One can
