
Nonsmooth Evolutions

Martin Brokate ∗∗

Contents

1 Some Basic Notions of Convex Analysis

2 Monotone Operators

3 The Bochner Integral

4 Evolution Equations

5 The play and the stop operator

6 Energetic Solutions

Course Notes, University of Graz, November 5 - 15, 2019

∗∗TU Munich and WIAS Berlin


1 Some Basic Notions of Convex Analysis

An introductory example. We consider the initial value problem

u′ + sign(u) = 0,   u(0) = u₀.   (1.1)

Here, sign(x) = 1 if x > 0 and sign(x) = −1 if x < 0. For the moment we do not fix the value of sign(0).

We look for solutions u : R+ → R. The only candidate which makes sense is

u(t) = max{u₀ − t, 0}   if u₀ ≥ 0,
u(t) = min{u₀ + t, 0}   if u₀ ≤ 0.   (1.2)

Indeed, in the region {(t, v) : t ∈ R, v ≠ 0} ⊂ R² every solution of u′ + sign(u) = 0 satisfies (d/dt)|u| = −1. Since u(|u₀|) = 0, for t > |u₀| we cannot have u(t) ≠ 0.

The function (1.2) is Lipschitz continuous, but not differentiable at t = |u₀|. Moreover, it depends continuously on u₀. We obtain it as a solution of (1.1), almost everywhere in R+, if and only if we set sign(0) = 0.

We can avoid having to find the correct value of sign(0) if we consider sign as a set-valued function,

sign(x) = 1 if x > 0,   sign(x) = −1 if x < 0,   sign(x) = [−1, 1] if x = 0,   (1.3)

and write the differential equation as a differential inclusion

u′ + sign(u) ∋ 0,   u(0) = u₀.   (1.4)

The solution (1.2) satisfies u′(t) + sign(u(t)) ∋ 0 almost everywhere in R+.

We will see in a moment that sign(x) = ∂ϕ(x) for ϕ(x) = |x|.

The definition (1.3) of the set-valued sign function has the property that every function α : R → R with α(x) ∈ sign(x) for all x ∈ R is nondecreasing.
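The inclusion (1.4) can also be checked numerically. The following minimal Python sketch (an illustration, not part of the original notes) runs the implicit Euler scheme u_{k+1} + h·sign(u_{k+1}) ∋ u_k; each step amounts to the soft-thresholding map, which is exactly the resolvent of the set-valued sign computed in Example 2.15(ii) below. The iterates coincide with formula (1.2) at the grid points.

```python
# Implicit Euler for the differential inclusion (1.4):
#   u_{k+1} + h * sign(u_{k+1}) ∋ u_k,  i.e.  u_{k+1} = (I + h*sign)^(-1)(u_k).
# The resolvent of the set-valued sign is the soft-thresholding map
# (computed explicitly in Example 2.15(ii) below).

def soft_threshold(r, h):
    if r > h:
        return r - h
    if r < -h:
        return r + h
    return 0.0

def exact(u0, t):
    """Formula (1.2)."""
    return max(u0 - t, 0.0) if u0 >= 0 else min(u0 + t, 0.0)

h, u0 = 0.1, 1.0
u, traj = u0, [u0]
for _ in range(15):
    u = soft_threshold(u, h)        # one implicit Euler step
    traj.append(u)

for k, uk in enumerate(traj):
    assert abs(uk - exact(u0, k * h)) < 1e-12
print(traj)    # decreases with slope -1 and stays at 0 once it arrives there
```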

The situation is different for the initial value problem

u′ − sign(u) = 0,   u(0) = u₀.   (1.5)

Here we have

u(t; u₀) = u₀ + t   if u₀ > 0,
u(t; u₀) = u₀ − t   if u₀ < 0.   (1.6)

Since

lim_{ε↓0} u(t; ε) = t,   lim_{ε↑0} u(t; ε) = −t,   (1.7)

it is not possible to define a solution u(·, 0) : R+ → R such that both uniqueness and continuous dependence on the initial value hold.

Some basic notions of convex analysis. In the following, all vector spaces are vector spaces over the real numbers.


Definition 1.1 (Epigraph)

Let V be a vector space, ϕ : V → (−∞, ∞]. The subset

epi ϕ = {(v, µ) : v ∈ V, µ ∈ R, µ ≥ ϕ(v)}   (1.8)

of V × R is called the epigraph of ϕ. The set

D(ϕ) = {v : v ∈ V, ϕ(v) < +∞}   (1.9)

is called the effective domain or simply domain of ϕ.

Denoting by p_V : V × R → V the projection to the first component, we get

D(ϕ) = p_V(epi ϕ).   (1.10)

We have epi ϕ = ∅ if and only if ϕ(v) = +∞ for all v ∈ V. If this is not the case, ϕ is called proper.

Definition 1.2 (Convex function)

Let V be a vector space. A function ϕ : V → (−∞, ∞] is called convex if epi ϕ is convex.

Proposition 1.3 Let V be a vector space. A function ϕ : V → (−∞, ∞] is convex if and only if

ϕ(λv + (1 − λ)w) ≤ λϕ(v) + (1 − λ)ϕ(w)   (1.11)

for all v, w ∈ V and all λ ∈ [0, 1].

Proof: Direct from the definitions. □

Definition 1.4 Let V be a vector space, K ⊂ V convex. A function ϕ : K → (−∞, ∞] is called convex if the function ϕ̃ : V → (−∞, ∞],

ϕ̃(v) = ϕ(v)   if v ∈ K,
ϕ̃(v) = +∞   if v ∉ K,   (1.12)

is convex.

Let K ⊂ V. The function I_K : V → [0, ∞],

I_K(v) = 0   if v ∈ K,
I_K(v) = +∞   if v ∉ K,   (1.13)

is called the indicator function of K. This function is convex if and only if K is convex, since epi I_K = K × R+.

Lemma 1.5 Let V be a vector space, ϕ : V → (−∞, ∞] convex. Then the sublevel sets {v : v ∈ V, ϕ(v) ≤ α} and {v : v ∈ V, ϕ(v) < α} are convex for all α ∈ (−∞, ∞].

Proof: Direct from the definitions. □


Definition 1.6 (Lower semicontinuity)

Let V be a Banach space. A function ϕ : V → (−∞, ∞] is called (weakly) lower semicontinuous if the sublevel sets

M_α = {v : v ∈ V, ϕ(v) ≤ α}   (1.14)

are (weakly) closed for all α ∈ R.

Proposition 1.7 Let V be a Banach space. A function ϕ : V → (−∞, ∞] is (weakly) lower semicontinuous if and only if epi ϕ is (weakly) closed in V × R.

Proof: "⇐": Let α ∈ R. Then

F_α = {(v, α) : v ∈ V, ϕ(v) ≤ α} = epi ϕ ∩ (V × {α})   (1.15)

is (weakly) closed in V × R; therefore, so is the sublevel set M_α = j_α⁻¹(F_α); here j_α : V → V × R denotes the embedding j_α(v) = (v, α).

"⇒": We show that the complement of epi ϕ is open. Let (v, α) ∉ epi ϕ, so ϕ(v) > α. Choose an ε > 0 such that ϕ(v) > α + ε. By assumption, the set U = {v : ϕ(v) > α + ε} is open in V, and W = U × (α − ε, α + ε) is an open neighborhood of (v, α) with W ∩ epi ϕ = ∅, since ϕ(w) > α + ε > β for all (w, β) ∈ W. □

Corollary 1.8 Let V be a Banach space. A subset K ⊂ V is closed if and only if I_K is lower semicontinuous.

Proof: "⇒": epi I_K = K × [0, ∞).

"⇐": K = {v : I_K(v) ≤ 0}. □

Corollary 1.9 Let V be a Banach space, ϕ : V → (−∞, ∞] convex. Then ϕ is lower semicontinuous if and only if it is weakly lower semicontinuous.

Proof: In a Banach space, a convex set is closed if and only if it is weakly closed. □

Lemma 1.10 Let V be a Banach space, let ϕ : V → (−∞, ∞] be convex and lower semicontinuous. Then

ϕ(v) ≤ lim inf_{n→∞} ϕ(vₙ)   (1.16)

for all sequences vₙ ⇀ v.

Proof: Assume that vₙ ⇀ v, but ϕ(v) > lim inf ϕ(vₙ). Then there exist a subsequence {v_{n_k}} and an ε > 0 such that ϕ(v_{n_k}) ≤ ϕ(v) − ε =: α. As ϕ is convex and lower semicontinuous, the sublevel set M_α is weakly closed, so ϕ(v) ≤ α, a contradiction. □

Proposition 1.11 Let V be a reflexive Banach space, let ϕ : V → (−∞, ∞] be convex, lower semicontinuous and proper. Moreover, we assume that

lim_{‖v‖→∞} ϕ(v) = ∞.   (1.17)

Then there exists u ∈ V such that

ϕ(u) = min_{v∈V} ϕ(v).   (1.18)


Proof: Let {uₙ} be a minimizing sequence for ϕ in V, that is, ϕ(uₙ) → inf_{v∈V} ϕ(v). Due to (1.17), all sublevel sets of ϕ and, hence, the sequence {uₙ} are bounded in V. Since V is reflexive, there exists a subsequence {u_{n_k}} with u_{n_k} ⇀ u for some u ∈ V. From Lemma 1.10 it follows that

ϕ(u) ≤ lim inf_{k→∞} ϕ(u_{n_k}) = inf_{v∈V} ϕ(v). □

Lemma 1.12 Let V be a Banach space, let ϕ_i : V → (−∞, ∞] be convex and lower semicontinuous for all i ∈ I. Then sup_{i∈I} ϕ_i is convex and lower semicontinuous.

Proof: This follows from

epi(sup_{i∈I} ϕ_i) = ⋂_{i∈I} epi ϕ_i. □

Proposition 1.13 Let V be a Banach space, ϕ : V → (−∞, ∞] convex, lower semicontinuous and proper. Then

ϕ = sup{g | g : V → R affine and continuous, g ≤ ϕ}.   (1.19)

Proof: "≥": Obvious.

"≤": It suffices to show: if (v, a) ∈ V × R with a < ϕ(v), then there exists an affine continuous function g : V → R such that a ≤ g(v) and g ≤ ϕ. This is proved as a consequence of the separation theorem, applied in the space V × R to the point (v, a) and the closed convex set epi ϕ. □

Definition 1.14 (Subdifferential)

Let V be a Banach space, ϕ : V → (−∞, ∞]. A functional u* ∈ V* is called a subgradient of ϕ in u ∈ V if ϕ(u) < ∞ and

ϕ(v) ≥ ϕ(u) + ⟨u*, v − u⟩ ,   for all v ∈ V.   (1.20)

The set

∂ϕ(u) = {u* : u* ∈ V*, u* is a subgradient of ϕ in u}   (1.21)

is called the subdifferential of ϕ in u.

We set ∂ϕ(u) = ∅ if ϕ(u) = ∞.

If V = H is a Hilbert space, according to the Riesz isomorphism between H and H* we also call w ∈ H a subgradient of ϕ in u ∈ H if ϕ(u) < ∞ and

ϕ(v) ≥ ϕ(u) + ⟨w, v − u⟩ ,   for all v ∈ H.   (1.22)

The subdifferential is then given by

∂ϕ(u) = {w : w ∈ H, w is a subgradient of ϕ in u}.   (1.23)


Example 1.15

(i) For ϕ : R → R, ϕ(v) = |v|, we have ∂ϕ(u) = {1} if u > 0, ∂ϕ(u) = {−1} if u < 0, and ∂ϕ(0) = [−1, 1]. Thus, the subdifferential of the absolute value function equals the set-valued sign function.

(ii) For ϕ : R → R,

ϕ(v) = 1 for v > 0,   ϕ(v) = 0 for v ≤ 0,

we have ∂ϕ(0) = {0}. But if we set

ϕ(v) = 1 for v ≥ 0,   ϕ(v) = 0 for v < 0,

we have ∂ϕ(0) = ∅.
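As a small numerical illustration of part (i) (not contained in the notes), the subgradient inequality (1.22) for ϕ(v) = |v| at u = 0 reduces to |v| ≥ g·v for all v, which holds precisely for g ∈ [−1, 1]:

```python
# Numerical check of Example 1.15(i) at u = 0: the subgradient inequality
# |v| >= |0| + g*(v - 0) holds for all v exactly when g lies in [-1, 1].

def is_subgradient_at_zero(g, vs):
    return all(abs(v) >= g * v - 1e-12 for v in vs)

vs = [k / 100.0 for k in range(-300, 301)]              # test points in [-3, 3]
for g in (-1.5, -1.0, -0.3, 0.0, 0.7, 1.0, 1.5):
    print(g, is_subgradient_at_zero(g, vs))             # True iff -1 <= g <= 1
```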

Definition 1.16 (Normal cone)

Let V be a Banach space, K ⊂ V convex, u ∈ K. A functional u* ∈ V* is called a support functional for K in u if

⟨u*, u − v⟩ ≥ 0 ,   for all v ∈ K.   (1.24)

The set

N_K(u) = {u* : u* ∈ V*, u* is a support functional for K in u}   (1.25)

is called the normal cone of K in u. For u ∉ K we set N_K(u) = ∅.

Again, if V = H is a Hilbert space, one may replace the support functionals u* ∈ H* by elements w ∈ H with ⟨w, u − v⟩ ≥ 0 for all v ∈ K. Then N_K(u) becomes a subset of H instead of H*.

Lemma 1.17 Let V be a Banach space, K ⊂ V convex. Then

∂I_K(u) = N_K(u)   if u ∈ K,   (1.26)

and ∂I_K(u) = ∅ otherwise.

Proof: Direct from the definitions. □

Proposition 1.18 Let V be a Banach space, ϕ : V → (−∞, ∞], let u ∈ V with ϕ(u) < ∞. Then

ϕ(u) = min_{v∈V} ϕ(v)   ⇔   0 ∈ ∂ϕ(u).   (1.27)

Proof: Direct from the definition. □


Lemma 1.19 Let ϕ : R → (−∞, ∞] be convex, u ∈ D(ϕ). Then

d(t) = (ϕ(u + t) − ϕ(u)) / t   (1.28)

defines a nondecreasing function d : R \ {0} → (−∞, ∞]. Moreover, d(−t) ≤ d(t) for all t > 0.

Proof: We first consider d on (0, ∞). For 0 < s < t we have

u + s = ((t − s)/t) u + (s/t)(u + t),

therefore

ϕ(u + s) ≤ ((t − s)/t) ϕ(u) + (s/t) ϕ(u + t).

We subtract ϕ(u) and divide by s to obtain

d(s) = (ϕ(u + s) − ϕ(u))/s ≤ (ϕ(u + t) − ϕ(u))/t = d(t).

Now we consider d on (−∞, 0). For this purpose, we define ϕ̃ : R → (−∞, ∞] by ϕ̃(r) = ϕ(2u − r). Then ϕ̃ is convex, and the corresponding difference quotient becomes

d̃(t) = (ϕ̃(u + t) − ϕ̃(u))/t = (ϕ(u − t) − ϕ(u))/t = −d(−t).

By what we have proved above, d̃ is nondecreasing on (0, ∞). Therefore, d is nondecreasing on (−∞, 0).

Finally, we show that d(−t) ≤ d(t) for all t > 0. Indeed, by convexity,

ϕ(u) ≤ (1/2) ϕ(u − t) + (1/2) ϕ(u + t).

Thus ϕ(u) − ϕ(u − t) ≤ ϕ(u + t) − ϕ(u), and therefore

d(−t) = (ϕ(u − t) − ϕ(u))/(−t) ≤ (ϕ(u + t) − ϕ(u))/t = d(t). □
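A quick numerical sanity check of Lemma 1.19 (not part of the notes; the convex function ϕ(x) = |x| + x² and the point u are arbitrary illustrative choices):

```python
# Check that d(t) = (phi(u + t) - phi(u)) / t is nondecreasing on R \ {0}
# and that d(-t) <= d(t) for t > 0, for a sample convex function.

def phi(x):
    return abs(x) + x * x     # convex: sum of convex functions

u = 0.5
ts = [k / 10.0 for k in range(-30, 31) if k != 0]   # grid on [-3, 3] \ {0}
d = [(phi(u + t) - phi(u)) / t for t in ts]

assert all(d[i] <= d[i + 1] + 1e-12 for i in range(len(d) - 1))   # nondecreasing
assert all((phi(u - t) - phi(u)) / (-t) <= (phi(u + t) - phi(u)) / t + 1e-12
           for t in (0.1, 0.7, 2.0))                              # d(-t) <= d(t)
print("difference quotient of a convex function is monotone")
```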

Proposition 1.20 Let H be a Hilbert space, let ϕ : H → (−∞, ∞] be convex, lower semicontinuous and proper, and let f ∈ H, c > 0. Then the function J : H → (−∞, ∞],

J(v) = (c/2) ‖v − f‖² + ϕ(v),   (1.29)

has a unique minimum u ∈ H, and

c(f − u) ∈ ∂ϕ(u).   (1.30)


Proof: As ϕ is convex, lower semicontinuous and proper, the same is true for J. By Proposition 1.13, ϕ has an affine minorant, that is, there exist w ∈ H and α ∈ R such that

ϕ(v) ≥ ⟨w, v⟩ − α ,   for all v ∈ H.   (1.31)

It follows that

J(v) ≥ ‖v‖ ( (c/2) ‖v‖ − c ‖f‖ − ‖w‖ ) − α ,   for all v ∈ H.

Thus, J(v) → ∞ as ‖v‖ → ∞. Now Proposition 1.11 implies that J has a minimum u ∈ H; this minimum is unique since J is strictly convex.

The optimality condition (1.30) can be obtained from the sum rule for subdifferentials. Alternatively, an elementary proof is the following. Let v ∈ H be arbitrary. We set

v_t = u + t(v − u) ,   t ∈ [0, 1].

For all t ∈ (0, 1] we have

0 ≤ J(v_t) − J(u) = (c/2) ‖(u − f) + t(v − u)‖² − (c/2) ‖u − f‖² + ϕ(v_t) − ϕ(u)
  = c t ⟨u − f, v − u⟩ + (c t²/2) ‖v − u‖² + ϕ(v_t) − ϕ(u).

Dividing by t yields

0 ≤ c ⟨u − f, v − u⟩ + (c t/2) ‖v − u‖² + (ϕ(v_t) − ϕ(u))/t
  ≤ c ⟨u − f, v − u⟩ + (c t/2) ‖v − u‖² + ϕ(v) − ϕ(u),

the latter since the difference quotient is monotone according to Lemma 1.19. Passing to the limit t → 0 yields

0 ≤ c ⟨u − f, v − u⟩ + ϕ(v) − ϕ(u).

As v ∈ H was arbitrary, c(f − u) ∈ ∂ϕ(u) follows. □
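The following Python sketch (an illustration, not from the notes) verifies Proposition 1.20 in the simplest case H = R, ϕ(v) = |v|: the minimizer of J is the soft-thresholding of f at level 1/c, cross-checked here by brute-force grid minimization, and the optimality condition (1.30) says that c(f − u) lies in the set-valued sign of u (cf. Example 1.15(i)).

```python
# Verify Proposition 1.20 for H = R, phi(v) = |v|:
# the minimizer of J(v) = (c/2)(v - f)^2 + |v| is the soft-thresholding of f
# at level 1/c, and c*(f - u) lies in the set-valued sign(u) = ∂|u|.

def prox_abs(f, c):
    """Closed-form minimizer (soft-thresholding at level 1/c)."""
    t = 1.0 / c
    if f > t:
        return f - t
    if f < -t:
        return f + t
    return 0.0

def grid_argmin(f, c, lo=-5.0, hi=5.0, n=20001):
    """Brute-force minimizer of J on a fine grid (cross-check only)."""
    vs = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return min(vs, key=lambda v: 0.5 * c * (v - f) ** 2 + abs(v))

def in_sign(g, u, tol=1e-9):
    """Membership test g ∈ sign(u) for the set-valued sign function."""
    if u > tol:
        return abs(g - 1.0) < tol
    if u < -tol:
        return abs(g + 1.0) < tol
    return -1.0 - tol <= g <= 1.0 + tol

c = 2.0
for f in (-3.0, -0.2, 0.0, 0.4, 1.7):
    u = prox_abs(f, c)
    assert abs(u - grid_argmin(f, c)) < 1e-3     # grid spacing is 5e-4
    assert in_sign(c * (f - u), u)               # optimality condition (1.30)
print("optimality condition (1.30) verified for phi = |.|")
```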


2 Monotone Operators

Let V, W be sets, R ⊂ V × W a relation. We may interpret a relation as a set-valued mapping in the following way. Given a relation R, we define

A : V → P(W),   (2.1)

by

Av = {w : w ∈ W, (v, w) ∈ R}.   (2.2)

Instead of (2.1), we write

A : V ⇒ W.   (2.3)

Conversely, any set-valued mapping A : V ⇒ W defines a relation R if we set (v, w) ∈ R ⇔ w ∈ Av.

The domain and the range of A are defined by

D(A) = {v : v ∈ V, Av ≠ ∅} ,   im(A) = ⋃_{v∈V} Av.   (2.4)

Let A, B : V ⇒ W be set-valued mappings which arise from relations R and S, respectively. B is called an extension of A if R ⊂ S; if moreover R ≠ S, the extension is called proper.

The inverse A⁻¹ : W ⇒ V of A : V ⇒ W is defined as

A⁻¹ w = {v : v ∈ V, w ∈ Av}.   (2.5)

We have D(A⁻¹) = im(A), since

w ∈ Av ⇔ v ∈ A⁻¹ w

holds for all v ∈ V, w ∈ W.

Let W be a vector space, let A, B : V ⇒ W and λ ∈ R. We define

λA = {(v, λw) : v ∈ V, w ∈ Av},
A + B = {(v, w + z) : v ∈ V, w ∈ Av, z ∈ Bv},
cl(conv A) = {(v, w) : v ∈ V, w ∈ cl(conv Av)}.

For the sum we have D(A + B) = D(A) ∩ D(B), as M + ∅ = ∅ for every subset M of W.

Definition 2.1 (Monotone operator)

(i) Let V be a Banach space. An operator A : V ⇒ V* is called monotone if

⟨v₂* − v₁*, v₂ − v₁⟩ ≥ 0 ,   for all v₁, v₂ ∈ V, v₁* ∈ Av₁, v₂* ∈ Av₂.   (2.6)

(ii) Let H be a Hilbert space. An operator A : H ⇒ H is called monotone if

⟨w₂ − w₁, v₂ − v₁⟩ ≥ 0 ,   for all v₁, v₂ ∈ H, w₁ ∈ Av₁, w₂ ∈ Av₂.   (2.7)

In both cases, A is called maximal monotone if A does not have a proper extension which is monotone.


Let f : R → R be monotone nondecreasing. Then

f̃(r) = [f(r−), f(r+)] ,   f(r−) := sup_{t<r} f(t) ,   f(r+) := inf_{t>r} f(t) ,   (2.8)

defines a maximal monotone operator f̃ : R ⇒ R.

Using Zorn's lemma one can show that every monotone operator has a maximal monotone extension.
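A minimal illustration of (2.8) (not part of the notes; the step function below is an arbitrary choice): for f(r) = 0 for r < 0 and f(r) = 1 for r ≥ 0, the construction fills in the jump, so that f̃(0) = [0, 1] while f̃(r) is the singleton {f(r)} elsewhere.

```python
# Maximal monotone extension (2.8) of a nondecreasing step function:
# f~(r) = [f(r-), f(r+)], with the one-sided limits approximated numerically
# (valid here because f is piecewise constant around the test points).

def f(r):                      # nondecreasing, with a jump at r = 0
    return 0.0 if r < 0 else 1.0

def f_minus(r, eps=1e-9):      # sup_{t<r} f(t), approximated by f(r - eps)
    return f(r - eps)

def f_plus(r, eps=1e-9):       # inf_{t>r} f(t), approximated by f(r + eps)
    return f(r + eps)

for r in (-1.0, 0.0, 2.0):
    print(r, [f_minus(r), f_plus(r)])
# At the jump, f~(0) = [0, 1]; elsewhere f~(r) is the singleton {f(r)}.
```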

Lemma 2.2 Let V be a Banach space, let A, B : V ⇒ V* be monotone, λ ≥ 0.

(i) A⁻¹, λA, A + B and cl(conv A) are monotone.

(ii) If A is maximal monotone, then Av is closed and convex for all v ∈ V.

The same is true if H is a Hilbert space and A, B : H ⇒ H are monotone.

Proof: (i) follows directly from the definitions. (ii) holds since cl(conv A) is a monotone extension of A by (i), so A = cl(conv A) if A is maximal monotone. □

Definition 2.3 (Accretive operator)

Let V be a Banach space, A : V ⇒ V.

(i) A is called accretive if

‖v₂ − v₁‖ ≤ ‖(v₂ + λw₂) − (v₁ + λw₁)‖   (2.9)

holds for all v₁, v₂ ∈ V, w₁ ∈ Av₁, w₂ ∈ Av₂ and all λ > 0.

(ii) A is called maximal accretive if it is accretive and has no proper extension which is accretive.

(iii) A is called m-accretive if it is accretive and im(I + A) = V.

Proposition 2.4 Let H be a Hilbert space, A : H ⇒ H.

(i) Let v, w ∈ H. Then

⟨v, w⟩ ≥ 0   ⇔   ‖v‖ ≤ ‖v + λw‖ for all λ > 0.   (2.10)

(ii) A is monotone if and only if A is accretive.

Proof: (i) This follows from the equality

‖v + λw‖² − ‖v‖² = 2λ ⟨v, w⟩ + λ² ‖w‖².

(ii) We apply (i) with v = v₂ − v₁ and w = w₂ − w₁, where wᵢ ∈ Avᵢ. □
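The identity used in the proof, and with it the equivalence (2.10), is easy to confirm numerically; the sketch below (not part of the notes) does so for random vectors in R⁵.

```python
import numpy as np

# Numerical check of the identity used in the proof and of equivalence (2.10),
# here in H = R^5 with random test vectors.
rng = np.random.default_rng(1)
lams = [10.0 ** k for k in range(-6, 7)]                 # sampled lambdas > 0

for _ in range(500):
    v, w = rng.standard_normal(5), rng.standard_normal(5)
    for lam in lams:
        lhs = np.dot(v + lam * w, v + lam * w) - np.dot(v, v)
        rhs = 2 * lam * np.dot(v, w) + lam ** 2 * np.dot(w, w)
        assert abs(lhs - rhs) < 1e-8 * (1.0 + abs(rhs))  # the algebraic identity
    d = np.dot(v, w)
    if d >= 0:
        # "=>" in (2.10): adding lam*w cannot decrease the norm
        assert all(np.linalg.norm(v) <= np.linalg.norm(v + lam * w) + 1e-12
                   for lam in lams)
    elif d < -1e-6:
        # "<=" in (2.10), contrapositive: at lam = -<v,w>/||w||^2 the norm drops
        lam = -d / np.dot(w, w)
        assert np.linalg.norm(v + lam * w) < np.linalg.norm(v)
print("identity and equivalence (2.10) confirmed on random samples")
```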

Proposition 2.5 Let H be a Hilbert space, ϕ : H → (−∞, ∞] convex, lower semicontinuous and proper. Then ∂ϕ : H ⇒ H is m-accretive.

Proof: Let v₁, v₂ ∈ H and w₁ ∈ ∂ϕ(v₁), w₂ ∈ ∂ϕ(v₂). Adding the inequalities

ϕ(v₂) − ϕ(v₁) ≥ ⟨w₁, v₂ − v₁⟩ ,
ϕ(v₁) − ϕ(v₂) ≥ ⟨w₂, v₁ − v₂⟩

yields ⟨w₂ − w₁, v₂ − v₁⟩ ≥ 0, so ∂ϕ is accretive by Proposition 2.4. It remains to show that

im(I + ∂ϕ) = H.

Let f ∈ H be arbitrary. We define J : H → (−∞, ∞] by

J(v) = (1/2) ‖v − f‖² + ϕ(v).

According to Proposition 1.20, J has a unique minimum u ∈ H, and f − u ∈ ∂ϕ(u). Thus f ∈ im(I + ∂ϕ). □

Example 2.6 On H = L²(Ω), Ω ⊂ Rⁿ open, we define

Av = −∆v ,   D(A) = H₀¹(Ω) ∩ H²(Ω).   (2.11)

We want to prove that A : H ⇒ H is m-accretive. Indeed, A : D(A) → H is monotone (hence, accretive) because

⟨Av, v⟩ = ∫_Ω −∆v(x) · v(x) dx = ∫_Ω ‖∇v(x)‖² dx ≥ 0 ,   for all v ∈ D(A).

Therefore, A is m-accretive if and only if the boundary value problem

−∆u + u = f

has a solution u ∈ D(A) for every f ∈ L²(Ω). This is the case if Ω is bounded and ∂Ω is sufficiently regular. We refer to the variational theory of elliptic partial differential equations, see e.g. [8]. □
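A hedged one-dimensional illustration (not part of the notes): discretizing Av = −v″ on Ω = (0, 1) with homogeneous Dirichlet boundary conditions by central differences gives a symmetric positive semidefinite matrix, so the discrete operator is monotone, and I + A, i.e. the discrete version of −u″ + u = f, is invertible for every right-hand side.

```python
import numpy as np

# 1-D stand-in for Example 2.6: A = -d^2/dx^2 on (0,1) with Dirichlet BC,
# discretized by central differences on n interior grid points.
n, h = 200, 1.0 / 201
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

# Monotonicity (accretivity): <Av, v> >= 0 for all test vectors.
rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(n)
    assert v @ (A @ v) >= 0.0

# m-accretivity boils down to solving -u'' + u = f, i.e. (I + A) u = f.
f = rng.standard_normal(n)
u = np.linalg.solve(np.eye(n) + A, f)
assert np.allclose((np.eye(n) + A) @ u, f)
print("discrete -Laplacian is monotone and I + A is surjective")
```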

Definition 2.7 (Nonexpansive operator)

Let H be a Hilbert space. A set-valued operator A : H ⇒ H is called nonexpansive if

‖w₂ − w₁‖ ≤ ‖v₂ − v₁‖   (2.12)

holds for all v₁, v₂ ∈ H, w₁ ∈ Av₁, w₂ ∈ Av₂.

Setting v₂ = v₁ in (2.12) we see that, for all v ∈ H, Av has at most one element. Nonexpansive operators are therefore single-valued mappings A : D(A) → H.

Lemma 2.8 Let H be a Hilbert space, A : H ⇒ H. Then A is accretive if and only if (I + λA)⁻¹ is nonexpansive for all λ > 0.

Proof: This is a direct consequence of Definitions 2.3 and 2.7. □

Lemma 2.9 Let H be a Hilbert space, A : H ⇒ H accretive. Then the following are equivalent:

(i) im(I + λA) = H for some λ > 0,

(ii) im(I + λA) = H for all λ > 0.


Proof: Let im(I + λA) = H for some λ > 0. Fix µ > λ/2. For arbitrary w ∈ H we define T_w : H → H by

T_w(v) = (I + λA)⁻¹( (λ/µ) w + (1 − λ/µ) v ).

Since 0 < λ/µ < 2 and (I + λA)⁻¹ is nonexpansive, T_w is a contraction on H. Let v ∈ H be the fixed point of T_w. Then

(λ/µ) w + (1 − λ/µ) v ∈ (I + λA) v.

Subtracting v and multiplying by µ/λ gives w ∈ v + µAv. As w was arbitrary, im(I + µA) = H for all µ > λ/2. Now (ii) follows by induction. □

Proposition 2.10 (Characterization of maximal monotone operators)

Let H be a Hilbert space, A : H ⇒ H. The following are equivalent:

(i) A is maximal monotone.

(ii) A is maximal accretive.

(iii) A is m-accretive.

(iv) A is accretive and im(I + λA) = H for all λ > 0.

Proof: "(i)⇔(ii)": This follows since, by Proposition 2.4(ii), an extension of A is monotone if and only if it is accretive.

"(iii)⇔(iv)": This is a direct consequence of Lemma 2.9.

"(iii)⇒(ii)": Let B be an accretive extension of A, let w ∈ Bv. Since A is m-accretive, there exists u ∈ D(A) such that w + v ∈ u + Au. Thus

w + v ∈ u + Bu ,   w + v ∈ v + Bv ,

so u, v ∈ (I + B)⁻¹(w + v). Since (I + B)⁻¹ is nonexpansive, we must have u = v, thus w + v ∈ v + Av and finally w ∈ Av. It follows that A = B.

"(i)⇒(iii)": This proof is long and delicate; it relies on a minimax theorem (an existence result for saddle points) which in turn is based on Brouwer's fixed point theorem. We refer to [5] and to [2], Theorem 2.2. □

Lemma 2.11 (Lipschitz perturbation)

Let A : H ⇒ H be m-accretive, B : H → H accretive and Lipschitz continuous with D(B) = H. Then A + B is m-accretive.

Proof: Choose λ > 0 small enough such that λB : H → H is a contraction. For every w ∈ H we have

u + λAu + λBu ∋ w   ⇔   u = T_w u := (I + λA)⁻¹(w − λBu).

As (I + λA)⁻¹ is nonexpansive, T_w : H → H is a contraction and thus has a fixed point u ∈ H. Therefore, w ∈ im(I + λ(A + B)). Since w was arbitrary, it follows from Proposition 2.10 that A + B is m-accretive. □
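A minimal numerical sketch of the construction in the proof (not from the notes; the choices A = sign and B = arctan are illustrative assumptions): A = ∂|·| is m-accretive with resolvent given by soft-thresholding (Example 2.15(ii)), B = arctan is single-valued, accretive and Lipschitz with constant 1, and iterating T_w u = (I + λA)⁻¹(w − λBu) produces a solution of u + λ(Au + Bu) ∋ w.

```python
import math

# Fixed-point construction from the proof of Lemma 2.11 (illustrative choices:
# A = sign with resolvent = soft-thresholding, cf. Example 2.15(ii), and
# B = arctan, which is single-valued, accretive and Lipschitz with constant 1).

def J(r, lam):
    """Resolvent (I + lam*A)^(-1) for A = sign: soft-thresholding."""
    if r > lam:
        return r - lam
    if r < -lam:
        return r + lam
    return 0.0

def in_sign(g, u, tol=1e-6):
    """Membership test g ∈ sign(u) for the set-valued sign function."""
    if u > tol:
        return abs(g - 1.0) < tol
    if u < -tol:
        return abs(g + 1.0) < tol
    return -1.0 - tol <= g <= 1.0 + tol

lam = 0.5            # lam*B is then a contraction (Lipschitz constant 0.5)
for w in (-2.0, 0.3, 1.5):
    u = 0.0
    for _ in range(200):                       # iterate T_w u = J(w - lam*B(u))
        u = J(w - lam * math.atan(u), lam)
    # the fixed point solves u + lam*(sign(u) + arctan(u)) ∋ w:
    assert in_sign((w - u) / lam - math.atan(u), u)
    print(w, "->", u)
```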

Part (ii) of the following lemma is a variant of what is known as Minty's trick. It allows us to pass to the limit in the scalar product although both factors converge only weakly.


Lemma 2.12 Let A : H ⇒ H be m-accretive, let wₙ ∈ Avₙ, vₙ ⇀ v and wₙ ⇀ w in H.

(i) If

lim inf_{n→∞} ⟨wₙ, vₙ⟩ ≤ ⟨w, v⟩ ,

then w ∈ Av.

(ii) If

lim sup_{n→∞} ⟨wₙ, vₙ⟩ ≤ ⟨w, v⟩ ,

then w ∈ Av and ⟨wₙ, vₙ⟩ → ⟨w, v⟩.

In particular, if one of the sequences converges strongly, then the scalar product converges and w ∈ Av holds.

Proof: (i) Let ṽ ∈ D(A) and w̃ ∈ Aṽ. As A is accretive,

0 ≤ ⟨wₙ − w̃, vₙ − ṽ⟩ = ⟨wₙ, vₙ⟩ − ⟨wₙ, ṽ⟩ − ⟨w̃, vₙ⟩ + ⟨w̃, ṽ⟩.

Passing to the limit inferior for n → ∞ yields

0 ≤ lim inf_{n→∞} ⟨wₙ, vₙ⟩ − ⟨w, ṽ⟩ − ⟨w̃, v⟩ + ⟨w̃, ṽ⟩ ≤ ⟨w − w̃, v − ṽ⟩.   (2.13)

Thus Ã : H ⇒ H, defined by Ãv = Av ∪ {w} and Ãu = Au for u ≠ v, is monotone. As A is maximal, we must have w ∈ Av.

(ii) Setting ṽ = v and w̃ = w in (2.13) (admissible, since w ∈ Av by (i)) yields

⟨w, v⟩ ≤ lim inf_{n→∞} ⟨wₙ, vₙ⟩ ,

which, together with the assumption, implies ⟨wₙ, vₙ⟩ → ⟨w, v⟩. □

An important tool for the analysis of accretive resp. monotone operators is the approximation by single-valued mappings, in particular the following one.

Let A : H ⇒ H be accretive. By Lemma 2.8,

J_λ := (I + λA)⁻¹

is a nonexpansive operator for every λ > 0, with D(J_λ) = H if A is m-accretive.

Definition 2.13 (Yosida regularization)

Let A : H ⇒ H be accretive, λ > 0. The Yosida regularization A_λ : H → H of A is defined as

A_λ = (1/λ)(I − J_λ).   (2.14)

Let ϕ : H → (−∞, ∞] be convex, lower semicontinuous and proper. We have seen in Proposition 1.20 that

v ↦ (1/(2λ)) ‖v − u‖² + ϕ(v)

has a unique minimum in H.


Definition 2.14 (Moreau regularization)

Let ϕ : H → (−∞, ∞] be convex, lower semicontinuous and proper, let λ > 0. The function

ϕ_λ(u) = min_{v∈H} { (1/(2λ)) ‖v − u‖² + ϕ(v) }   (2.15)

is called the Moreau regularization of ϕ.

Example 2.15 (i) Let H = R and let ϕ be the indicator function of {0}, that is, ϕ(r) = 0 for r = 0 and ϕ(r) = ∞ for r ≠ 0. The Moreau regularization of ϕ is

ϕ_λ(r) = min_{s∈R} { (1/(2λ)) |s − r|² + ϕ(s) } = (1/(2λ)) r².

The subdifferential β = ∂ϕ : R ⇒ R is

β(r) = R for r = 0,   β(r) = ∅ for r ≠ 0.

We have r ∈ (I + λβ)(s) if and only if s = 0 and r ∈ R, thus (I + λβ)⁻¹ = 0. The Yosida regularization of β becomes

β_λ(r) = r/λ.

(ii) Let H = R, ϕ(r) = |r|. The Moreau regularization of ϕ is given by

ϕ_λ(r) = r − λ/2   for r > λ,
ϕ_λ(r) = r²/(2λ)   for |r| ≤ λ,
ϕ_λ(r) = −r − λ/2   for r < −λ.

The subdifferential β = ∂ϕ equals the set-valued sign function

β(r) = 1 for r > 0,   β(r) = [−1, 1] for r = 0,   β(r) = −1 for r < 0.

We have r ∈ (I + λβ)(s) if and only if r = s + λ (if s > 0), resp. r = s − λ (if s < 0), resp. r ∈ s + [−λ, λ] = [−λ, λ] (if s = 0). It follows that

J_λ(r) = r − λ   for r > λ,
J_λ(r) = 0   for r ∈ [−λ, λ],
J_λ(r) = r + λ   for r < −λ.

Therefore, the Yosida regularization of the sign function is

β_λ(r) = (r − J_λ(r))/λ = 1 for r > λ,   β_λ(r) = r/λ for r ∈ [−λ, λ],   β_λ(r) = −1 for r < −λ.
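The closed-form expressions in Example 2.15(ii) can be cross-checked numerically. The following sketch (an illustration, not part of the notes) minimizes the Moreau functional over a fine grid and compares the result with the piecewise formulas for ϕ_λ, J_λ and β_λ; by Proposition 1.20 (with c = 1/λ), the grid minimizer must agree with J_λ(r).

```python
# Cross-check of Example 2.15(ii) for phi(r) = |r|: the Moreau regularization
# computed by grid minimization matches the piecewise formula, the grid
# minimizer is J_lam(r) (by Proposition 1.20 with c = 1/lam), and
# beta_lam = (id - J_lam)/lam matches the stated piecewise expression.

def moreau_grid(r, lam, lo=-10.0, hi=10.0, n=40001):
    best_val, best_s = float("inf"), None
    for k in range(n):
        s = lo + (hi - lo) * k / (n - 1)
        val = (s - r) ** 2 / (2 * lam) + abs(s)
        if val < best_val:
            best_val, best_s = val, s
    return best_val, best_s

def moreau_closed(r, lam):
    if r > lam:
        return r - lam / 2
    if r < -lam:
        return -r - lam / 2
    return r * r / (2 * lam)

def J(r, lam):                       # soft-thresholding
    if r > lam:
        return r - lam
    if r < -lam:
        return r + lam
    return 0.0

def yosida_closed(r, lam):
    if r > lam:
        return 1.0
    if r < -lam:
        return -1.0
    return r / lam

lam = 0.5
for r in (-3.0, -0.4, 0.0, 0.2, 2.5):
    val, s_min = moreau_grid(r, lam)
    assert abs(val - moreau_closed(r, lam)) < 1e-3
    assert abs(s_min - J(r, lam)) < 1e-3            # the minimizer is J_lam(r)
    assert abs((r - J(r, lam)) / lam - yosida_closed(r, lam)) < 1e-12
print("Example 2.15(ii): closed forms confirmed numerically")
```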
