Discretization-Optimization Methods for Nonlinear Parabolic Relaxed Optimal Control Problems with State Constraints
I. Chryssoverghi1, I. Coletsos1, J. Geiser2, B. Kokkinis1
(1) Department of Mathematics, School of Applied Mathematics and Physics National Technical University of Athens (NTUA)
Zografou Campus, 15780 Athens, Greece e-mail: ichris@central.ntua.gr
(2) Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Mohrenstrasse 39, D-10117 Berlin, Germany
e-mail: geiser@wias-berlin.de
Abstract
We consider an optimal control problem described by a semilinear parabolic partial differential equation, with control and state constraints, where the state constraints and cost involve also the state gradient. Since this problem may have no classical solutions, it is reformulated in the relaxed form. The relaxed control problem is discretized by using a finite element method in space involving numerical integration and an implicit θ-scheme in time, while the controls are approximated by blockwise constant relaxed controls. Under appropriate assumptions, we prove that relaxed accumulation points of sequences of optimal (resp. admissible and extremal) discrete relaxed controls are optimal (resp. admissible and extremal) for the continuous relaxed problem. We then apply a penalized conditional descent method to each discrete problem, and also a progressively refining version of this method to the continuous relaxed problem. We prove that accumulation points of sequences generated by the first method are extremal for the discrete problem, and that relaxed accumulation points of sequences of discrete controls generated by the second method are admissible and extremal for the continuous relaxed problem. Finally, numerical examples are given.
Keywords. Optimal control, semilinear parabolic systems, state constraints, relaxed controls, discretization, θ-scheme, discrete penalized conditional descent method.
AMS Subject Classification: 49M25, 49M05, 65N30.
1 Introduction
We consider an optimal distributed control problem for systems governed by a semilinear parabolic partial differential equation, with control and state constraints, where the state constraints and cost involve also the gradient of the state. The problem is motivated, for example, by the control of a heat (or other) diffusion process whose source is nonlinear in the temperature, with nonconvex cost and control constraint set (e.g. on-off type control). Since this problem may have no classical solutions, it is reformulated in the relaxed form, using Young measures. The relaxed problem is discretized by using a Galerkin finite element method with continuous piecewise linear basis functions in space and an implicit θ-scheme in time, while the controls are approximated by blockwise constant Young measures. We first state the necessary conditions for optimality for the continuous problems, and then for the discrete relaxed problem. Under appropriate assumptions, we prove that relaxed accumulation points of sequences of optimal (resp. admissible and extremal) discrete relaxed controls are optimal (resp. admissible and extremal) for the continuous relaxed problem. We then apply a penalized conditional descent method to each discrete problem, which generates Gamkrelidze
controls, and also a corresponding discretization-optimization method to the continuous relaxed problem that progressively refines the discretization during the iterations, thus reducing computing time and memory. We prove that accumulation points of sequences generated by the fixed discretization method are extremal for the discrete problem, and that relaxed accumulation points of sequences of discrete controls generated by the progressively refining method are admissible and extremal for the continuous relaxed problem. Using a standard procedure, the computed Gamkrelidze controls can then be approximated by piecewise constant classical ones.
Finally, numerical examples are given. For approximation of nonconvex optimal control and variational problems, and of Young measures, see e.g. [2-9], [12-14] and the references there.
2 The continuous optimal control problems
Let Ω be a bounded domain in ℝ^d, with boundary Γ, and let I = (0,T), T < ∞, be a time interval. Consider the semilinear parabolic state equation

(2.1) y_t + A(t)y + a₀(x,t)^T ∇y + b(x,t,y(x,t),w(x,t)) = f(x,t,y(x,t),w(x,t)) in Q = Ω×I,
(2.2) y(x,t) = 0 in Σ = Γ×I,
(2.3) y(x,0) = y⁰(x) in Ω,

where A(t) is the formal second order elliptic differential operator

(2.4) A(t)y := − Σ_{j=1}^d Σ_{i=1}^d (∂/∂x_i)[a_ij(x,t) ∂y/∂x_j].

The constraints on the control are w(x,t) ∈ U in Q, where U is a compact, not necessarily convex, subset of ℝ^{d'}, the state constraints are

(2.5) G_m(w) := ∫_Q g_m(x,t,y,∇y,w) dx dt = 0,  m = 1,...,p,
(2.6) G_m(w) := ∫_Q g_m(x,t,y,∇y,w) dx dt ≤ 0,  m = p+1,...,q,

and the cost functional to be minimized is

(2.7) G₀(w) := ∫_Q g₀(x,t,y,∇y,w) dx dt.

Defining the set of classical controls

(2.8) W := {w : (x,t) ↦ w(x,t) | w measurable from Q to U},

the continuous classical optimal control problem is to minimize G₀(w) subject to w ∈ W and to the above state constraints.
Next, we define the set of relaxed controls (Young measures; for the relevant theory, see [20], [17])

(2.9) R := {r : Q → M₁(U) | r weakly measurable} ⊂ L^∞_w(Q, M(U)) ≡ L¹(Q, C(U))*,

where M(U) (resp. M₁(U)) is the set of Radon (resp. probability) measures on U. The set R is endowed with the relative weak star topology of L¹(Q, C(U))*. The set R is convex, metrizable and compact. If we identify every classical control w(·) with its associated Dirac relaxed control r(·) = δ_{w(·)}, then W may be considered as a subset of R, and W is thus dense in R. For φ ∈ L¹(Q; C(U)) (or φ ∈ B(Q,U;ℝ), where B(Q,U;ℝ) is the set of Caratheodory functions in the sense of Warga [20]) and r ∈ L^∞_w(Q, M(U)) (in particular, for r ∈ R), we shall use the notation

(2.10) φ(x,t,r(x,t)) := ∫_U φ(x,t,u) r(x,t)(du),

and φ(x,t,r(x,t)) is thus linear (under convex combinations, for r ∈ R) in r. A sequence (r_k) converges to r in R iff

(2.11) lim_{k→∞} ∫_Q φ(x,t,r_k(x,t)) dx dt = ∫_Q φ(x,t,r(x,t)) dx dt,

for every φ ∈ L¹(Q; C(U)), or φ ∈ B(Q,U;ℝ), or φ ∈ C(Q̄ × U).
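The pairing (2.10) is simple to evaluate when the measure r(x,t) has finite support, which is the situation arising later for Gamkrelidze (convex combinations of Dirac) controls. The following is a minimal illustrative sketch of that evaluation, not code from the paper; the function names and the test integrand are ours:

```python
# Sketch: a relaxed control value on one point (x,t), represented as a
# finitely supported probability measure on U given as (weight, u) pairs,
# and the pairing (2.10): phi(x,t,r(x,t)) = integral of phi(x,t,u) r(x,t)(du).

def pair(phi, xt, measure):
    """Evaluate phi(x,t,r(x,t)) for a finitely supported probability measure."""
    x, t = xt
    return sum(w * phi(x, t, u) for w, u in measure)

# A Dirac relaxed control delta_{w(x,t)} with w(x,t) = 0.5 reduces the
# pairing to the classical evaluation phi(x,t,w(x,t)):
dirac = [(1.0, 0.5)]
# A strict mixture of the two control values 0 and 1:
mixture = [(0.5, 0.0), (0.5, 1.0)]

phi = lambda x, t, u: (u - 0.5) ** 2  # a hypothetical test integrand
```

Note that the pairing is linear in the measure, which is the convexification effect the relaxation is designed to provide.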
We denote by |·| the Euclidean norm in ℝ^n, by (·,·) and ‖·‖ the inner product and norm in L²(Ω), by (·,·)_Q and ‖·‖_Q the inner product and norm in L²(Q), by (·,·)₁ and ‖·‖₁ the inner product and norm in the Sobolev space V := H₀¹(Ω), and by <·,·> the duality bracket between the dual V* = H⁻¹(Ω) and V. We also define the usual bilinear form associated with A(t) and defined on V×V

(2.12) a(t,y,v) := Σ_{j=1}^d Σ_{i=1}^d ∫_Ω a_ij(x,t) (∂y/∂x_j)(∂v/∂x_i) dx.

The relaxed formulation of the above control problem is the following. The relaxed state equation, interpreted in weak form, is
(2.13) <y_t, v> + a(t,y,v) + (a₀(t)^T ∇y, v) + (b(t,y,r), v) = (f(t,y,r), v), ∀v ∈ V, a.e. in I,
(2.14) y(t) ∈ V a.e. in I, y(0) = y⁰,

(the derivative y_t is understood here in the sense of vector valued distributions, with values in V*), the control constraint is r ∈ R, and the state constraints and cost functionals are

(2.15) G_m(r) := ∫_Q g_m(x,t,y,∇y,r(x,t)) dx dt, m = 0,...,q.

The continuous relaxed optimal control problem is to minimize G₀(r) subject to the constraints

(2.16) r ∈ R, G_m(r) = 0, m = 1,...,p, G_m(r) ≤ 0, m = p+1,...,q.
In the sequel, we shall make some of the following assumptions.
Assumptions 2.1 Γ is Lipschitz if b = 0; else, Γ is C¹ and n ≤ 3.

Assumptions 2.2 The coefficients a_ij satisfy the ellipticity condition

(2.17) Σ_{j=1}^d Σ_{i=1}^d a_ij(x,t) z_i z_j ≥ α₀ Σ_{i=1}^d z_i², ∀z_i, z_j ∈ ℝ, (x,t) ∈ Q,

with α₀ > 0, a_ij ∈ L^∞(Q), which implies that

(2.18) |a(t,y,v)| ≤ α₁ ‖y‖₁ ‖v‖₁, a(t,v,v) ≥ α₂ ‖v‖₁², ∀y, v ∈ V, t ∈ I, for some α₁ ≥ 0, α₂ > 0.

Assumptions 2.3 a₀ ∈ L^∞(Q)^d, and the functions b, f are defined on Q×ℝ×U, measurable for fixed y, u, continuous for fixed x, t, and satisfy the conditions

(2.19) |b(x,t,y,u)| ≤ φ(x,t) + β y², b(x,t,y,u) y ≥ 0,
(2.20) |f(x,t,y,u)| ≤ ψ(x,t) + γ |y|, ∀(x,t,y,u) ∈ Q×ℝ×U,
(2.21) |f(x,t,y₁,u) − f(x,t,y₂,u)| ≤ L |y₁ − y₂|, ∀(x,t,y₁,y₂,u) ∈ Q×ℝ²×U,
(2.22) b(x,t,y₁,u) ≤ b(x,t,y₂,u), ∀(x,t,y₁,y₂,u) ∈ Q×ℝ²×U, with y₁ ≤ y₂,

where φ, ψ ∈ L²(Q), β, γ ≥ 0.

Assumptions 2.4 The functions g_m are defined on Q×ℝ^{d+1}×U, measurable for fixed y, ȳ, u, continuous for fixed x, t, and satisfy

(2.23) |g_m(x,t,y,ȳ,u)| ≤ ζ_m(x,t) + δ_m y² + δ_m′ |ȳ|², ∀(x,t,y,ȳ,u) ∈ Q×ℝ^{d+1}×U, with ζ_m ∈ L¹(Q), δ_m ≥ 0, δ_m′ ≥ 0

(here ȳ ∈ ℝ^d stands for the gradient argument ∇y).
Assumptions 2.5 The functions b, b_y, f_y (resp. g_m, g_my, g_mȳ) are defined on Q×ℝ×U (resp. Q×ℝ^{d+1}×U), measurable on Q for fixed (y,u) ∈ ℝ×U (resp. (y,ȳ,u) ∈ ℝ^{d+1}×U) and continuous on ℝ×U (resp. ℝ^{d+1}×U) for fixed (x,t) ∈ Q, and satisfy

(2.24) |b_y(x,t,y,u)| ≤ ξ(x,t) + η |y|, |f_y(x,t,y,u)| ≤ L₁, ∀(x,t,y,u) ∈ Q×ℝ×U,
(2.25) |g_my(x,t,y,ȳ,u)| ≤ ζ_m1(x,t) + δ_m1 |y| + δ′_m1 |ȳ|,
(2.26) |g_mȳ(x,t,y,ȳ,u)| ≤ ζ_m2(x,t) + δ_m2 |y| + δ′_m2 |ȳ|, ∀(x,t,y,ȳ,u) ∈ Q×ℝ^{d+1}×U,

with ξ, ζ_m1, ζ_m2 ∈ L²(Q), η, δ_m1, δ′_m1, δ_m2, δ′_m2 ≥ 0.
The following theorem can be proved by monotonicity and compactness arguments (for continuous b, f and y⁰ ∈ V, see also a proof contained in Theorem 3.1 and Lemma 4.2 below).

Theorem 2.1 Under Assumptions 2.1-3, for every control r ∈ R and y⁰ ∈ L²(Ω) (or y⁰ ∈ V), the relaxed state equation has a unique solution y := y_r such that y ∈ L²(I,V), y_t ∈ L²(I,V*). Moreover, y is essentially equal to a function in C(Ī, L²(Ω)), and thus the initial condition is well defined.
The following lemma and theorem can be proved by using the techniques of [5], [7], [17].
Lemma 2.1 Under Assumptions 2.1-3, the operator r ↦ y_r, from R to L²(I,V), and to L²(I, L⁴(Ω)) if b ≠ 0, is continuous. Under Assumptions 2.1-4, the functionals r ↦ G_m(r), m = 0,...,q, from R to ℝ, are continuous.
It is well known that, even if the control set U is convex, the classical problem may have no classical solutions. Nevertheless, we have the following theorem stating the existence of an optimal relaxed control.
Theorem 2.2 Under Assumptions 2.1-4, if there exists an admissible control (i.e.
satisfying all the constraints), then there exists an optimal relaxed control.
Since W ⊂ R, we generally have

(2.27) c_R := min_{r ∈ R, constraints on r} G₀(r) ≤ inf_{w ∈ W, constraints on w} G₀(w) =: c_W,

where the equality holds, in particular, if there are no state constraints, as W is dense in R. Since usually approximation methods slightly violate the state constraints, approximating an optimal relaxed control by a relaxed or a classical control, hence the possibly lower relaxed optimal cost c_R, is not a drawback in practice (see [20], p. 259).
The following lemma and theorem can be proved by using the techniques of [5], [7], [20] (see also [11]).
Lemma 2.2 Under Assumptions 2.1-5, dropping the index m in the functionals, the directional derivative of G is given, for r, r' ∈ R, by

(2.28) DG(r, r'−r) := lim_{ε→0⁺} [G(r + ε(r'−r)) − G(r)]/ε = ∫_Q H(x,t,y,∇y,z, r'(x,t) − r(x,t)) dx dt,

where the Hamiltonian H is defined by

(2.29) H(x,t,y,ȳ,z,u) := z [f(x,t,y,u) − b(x,t,y,u)] + g(x,t,y,ȳ,u),

and the adjoint state z = z_r satisfies the linear adjoint equation

(2.30) −<z_t, v> + a(t,v,z) + (a₀^T ∇v, z) + (z b_y(y,r), v) = (z f_y(y,r) + g_y(y,∇y,r), v) + (g_ȳ(y,∇y,r), ∇v), ∀v ∈ V, a.e. in I,
(2.31) z(t) ∈ V a.e. in I, z(T) = 0,

with y = y_r. The mappings r ↦ z_r, from R to L²(Q), and (r,r') ↦ DG(r, r'−r), from R×R to ℝ, are continuous.
Next, we state the relaxed necessary conditions for optimality.
Theorem 2.3 Under Assumptions 2.1-5, if r ∈ R is optimal for either the relaxed or the classical optimal control problem, then r is extremal, i.e. there exist multipliers λ_m ∈ ℝ, m = 0,...,q, with λ₀ ≥ 0, λ_m ≥ 0 for m = p+1,...,q, and Σ_{m=0}^q |λ_m| = 1, such that

(2.32) Σ_{m=0}^q λ_m DG_m(r, r'−r) ≥ 0, ∀r' ∈ R,
(2.33) λ_m G_m(r) = 0, m = p+1,...,q (transversality conditions).

The global condition (2.32) is equivalent to the strong relaxed pointwise minimum principle

(2.34) H(x,t,y(x,t),∇y(x,t),z(x,t),r(x,t)) = min_{u∈U} H(x,t,y(x,t),∇y(x,t),z(x,t),u), a.e. in Q,

where the complete Hamiltonian H and adjoint z are defined with g := Σ_{m=0}^q λ_m g_m.

Remark. In the absence of equality state constraints, it can be shown that, if the optimal control r is regular, i.e. there exists r' ∈ R such that

(2.35) G_m(r) + DG_m(r, r'−r) < 0, m = p+1,...,q,

(Slater condition), then λ₀ ≠ 0 for any set of multipliers as in Theorem 2.3.
3 The discrete optimal control problems
Assumptions 3.1 Γ is appropriately piecewise C¹ if b = 0, and Γ is C¹ and n ≤ 3 if b ≠ 0; a(t,y,v) is independent of t (for simplicity) and symmetric if θ ≠ 1 in the θ-scheme below; the functions a₀, b, b_y, f, f_y are continuous (possibly finitely piecewise in t) on the closure of their domains of definition; and y⁰ ∈ V.
Under Assumptions 3.1, for each integer n ≥ 0, let Ω^n be a subdomain of Ω with polyhedral boundary Γ^n such that dist(Γ, Γ^n) = o(h^n), let {E_i^n}_{i=1}^{M^n} be an admissible regular quasi-uniform triangulation of Ω^n into closed d-simplices (elements), with h^n := max_i [diam(E_i^n)] → 0 as n → ∞, and let {I_j^n}_{j=1}^{N^n} be a subdivision of the interval Ī into closed intervals I_j^n = [t_{j−1}^n, t_j^n], of equal length Δt^n, with Δt^n → 0 as n → ∞. We define the blocks Q_ij^n = E_i^n × I_j^n. Let V^n ⊂ V be the subspace of functions that are continuous on Ω̄, are linear (i.e. affine) on each E_i^n, and vanish on Ω − Ω^n. Let u⁰ be any given fixed point in U. The set of discrete classical controls W^n ⊂ W is the subset of classical controls that are constant on the interior of each block Q_ij^n and equal to u⁰ on Q − (Ω^n × I). The set of discrete relaxed controls R^n ⊂ R is the subset of relaxed controls that are equal to a constant measure in M₁(U) on the interior of each block Q_ij^n and equal to δ_{u⁰} on Q − (Ω^n × I). The set R^n is endowed with the relative weak star topology of M(U)^{M^n N^n}. Clearly, we have W^n ⊂ R^n. For implementation reasons, one could alternatively use a coarser partition for the discrete controls, that is, use discrete relaxed controls that are constant on hyperblocks Q′_{i′j′}^n = E′_{i′}^n × I′_{j′}^n, where the E′_{i′}^n are appropriate unions of some elements E_i^n and the I′_{j′}^n are appropriate unions of some intervals I_j^n.
For a given discrete control r^n ∈ R^n, and θ ∈ [1/2, 1] if b = 0, θ = 1 if b ≠ 0, the corresponding discrete state y^n := (y_0^n, ..., y_N^n) is given by the discrete state equation (implicit θ-scheme)

(3.1) (1/Δt^n)(y_j^n − y_{j−1}^n, v) + a(y_{jθ}^n, v) + (a₀(t_{jθ}^n)^T ∇y_{jθ}^n, v) + (b(t_{jθ}^n, y_{jθ}^n, r_j^n), v) = (f(t_{jθ}^n, y_{jθ}^n, r_j^n), v), for every v ∈ V^n, j = 1,...,N,
(3.2) (y_0^n − y⁰, v)₁ = 0, for every v ∈ V^n, y_j^n ∈ V^n, j = 1,...,N,

where we set

(3.3) y_{jθ}^n := (1−θ) y_{j−1}^n + θ y_j^n, t_{jθ}^n := (1−θ) t_{j−1}^n + θ t_j^n.
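In the simplified linear case b = 0, a₀ = 0, with f independent of y, one step of the θ-scheme (3.1) amounts to solving a linear system with matrix M/Δt + θA, where M and A are the mass and stiffness matrices. The following rough sketch uses 1D P1 finite elements on a uniform mesh; the problem data and names are illustrative, not the paper's examples:

```python
# Sketch of one theta-scheme time step for the linear heat equation with
# homogeneous Dirichlet conditions, 1D P1 elements on a uniform mesh of (0,1).
import numpy as np

def assemble_1d(m):
    """Mass matrix M and stiffness matrix A for m interior nodes, mesh size h."""
    h = 1.0 / (m + 1)
    M = h / 6.0 * (4 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1))
    A = 1.0 / h * (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
    return M, A

def theta_step(y_prev, dt, theta, M, A, F):
    """Solve (M/dt)(y - y_prev) + A y_theta = F, y_theta = (1-theta) y_prev + theta y."""
    lhs = M / dt + theta * A
    rhs = (M / dt - (1.0 - theta) * A) @ y_prev + F
    return np.linalg.solve(lhs, rhs)
```

With θ = 1 this is the implicit Euler case used in the paper when b ≠ 0; θ = 1/2 gives the Crank-Nicolson case, for which Lemma 4.1 below couples the steps by Δt^n ≤ C(h^n)².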
Theorem 3.1 Under Assumptions 2.2-3 and 3.1, if Δt^n ≤ c′ (resp. Δt^n ≤ c′(h^n)²), for some c′ sufficiently small, independent of n and r^n, if b = 0 (resp. b ≠ 0), then, for every n and every control r^n, the discrete state equation has a unique solution y^n such that ‖y_j^n‖ ≤ c, j = 0,...,N, with c independent of n and r^n.
Proof (sketch). Suppose first that b ≠ 0, θ = 1. Lemma 4.1 below shows that, if the solution y_j^n exists for every j, then ‖y_j^n‖ ≤ c, j = 0,...,N, with c independent of n and r^n, for Δt^n as in the assumptions. Suppose by induction that y_k^n exists for k ≤ j−1. Then the solution y_j^n is a fixed point of the mapping z = F_θ(y) (here with θ = 1), where z is the solution, for y given, of the equations

(3.4) (1/Δt^n)(z − y_{j−1}^n, v) + a(θz + (1−θ)y_{j−1}^n, v) + (a₀(t_{jθ}^n)^T ∇(θy + (1−θ)y_{j−1}^n), v) + (b(t_{jθ}^n, θy + (1−θ)y_{j−1}^n, r_j^n), v) = (f(t_{jθ}^n, θy + (1−θ)y_{j−1}^n, r_j^n), v), ∀v ∈ V^n,

which reduce (choosing a basis in V^n) to a regular linear system in z. We then show (using our assumptions, the continuous injection of H₀¹(Ω) into L⁴(Ω), and the inverse inequality, see [10]) that ‖z‖ = ‖F_θ(y)‖ ≤ c, if ‖y‖ ≤ 2c, for Δt^n as above, i.e. F_θ maps the closed ball B(0,2c) of center 0 and radius 2c in V^n into itself. Moreover, one can see (using also the mean value theorem for b) that F_θ is also contractive in this ball, for Δt^n as above. Therefore F_θ has a unique fixed point in B(0,2c), which is the solution y_j^n. If b = 0, θ ∈ [1/2,1], one can easily see that, by the Lipschitz continuity of f in y, the mapping F_θ is contractive on the whole space V^n, for Δt^n sufficiently small; hence F_θ has a unique fixed point in V^n.

The solution y_j^n can be computed by the predictor-corrector method, using the linearized semi-implicit predictor scheme, i.e. starting the fixed point iteration with y_j^{(0)} := F_θ(y_{j−1}^n) ∈ B(0,2c) or V^n.
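The contraction argument can be mimicked on a scalar ODE analogue y′ + b(y) = f with θ = 1 (our own toy reduction, not the paper's PDE setting): the implicit step y_j solves y_j = F(y_j) with F(y) = y_{j−1} + Δt (f_j − b(y)), and F is a contraction for Δt small, since |F(y) − F(y′)| ≤ Δt L_b |y − y′| locally:

```python
# Sketch of the fixed-point / predictor-corrector idea behind Theorem 3.1,
# reduced to a scalar implicit Euler step for y' + b(y) = f.

def implicit_step(y_prev, dt, b, f_j, tol=1e-12, max_iter=200):
    y = y_prev  # predictor: start the iteration from the previous value
    for _ in range(max_iter):
        y_new = y_prev + dt * (f_j - b(y))  # corrector (one F evaluation)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

b = lambda y: y ** 3          # monotone nonlinearity with b(y) * y >= 0
y1 = implicit_step(1.0, 0.1, b, 0.0)  # solves y + 0.1 * y**3 = 1
```

Here Δt·L_b ≈ 0.1 · 3y² < 1 near the fixed point, so the iteration converges geometrically.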
The discrete control constraint is r^n ∈ R^n and the discrete functionals are

(3.5) G_m^n(r^n) := Δt^n Σ_{j=1}^N ∫_Ω g_m(t_{jθ}^n, y_{jθ}^n, ∇y_{jθ}^n, r_j^n) dx, m = 0,...,q.

The discrete state constraints are either of the two following ones

(3.6) Case (a): |G_m^n(r^n)| ≤ ε_m^n, m = 1,...,p,
(3.7) Case (b): G_m^n(r^n) = ε_m^n, m = 1,...,p,

and

(3.8) G_m^n(r^n) ≤ ε_m^n, ε_m^n ≥ 0, m = p+1,...,q,

where the feasibility perturbations ε_m^n are given numbers converging to zero, to be defined later. The discrete cost functional to be minimized is G₀^n(r^n).
Theorem 3.2 Under Assumptions 2.2-4 and 3.1, the mappings r^n ↦ y_j^n and r^n ↦ G_m^n(r^n), defined on R^n, are continuous. If any of the discrete problems is feasible, then it has a solution.

Proof. The continuity of the operators r^n ↦ y_j^n is easily proved by induction on j (or by using the discrete Bellman-Gronwall inequality, see [18]). The continuity of r^n ↦ G_m^n(r^n) follows from the continuity of the g_m. The existence of an optimal control follows then from the compactness of R^n.
The proofs of the following lemma and theorem parallel the continuous case and are omitted.
Lemma 3.2 We drop the index m in g_m and G_m^n. Under Assumptions 2.2-5 and 3.1, for r^n, r′^n ∈ R^n, the directional derivative of the functional G^n is given by

(3.9) DG^n(r^n, r′^n − r^n) = Δt^n Σ_{j=1}^N ∫_Ω H(t_{jθ}^n, y_{jθ}^n, ∇y_{jθ}^n, z_{j,1−θ}^n, r′_j^n − r_j^n) dx,

where the discrete adjoint z^n is given by the linear adjoint scheme

(3.10) −(1/Δt^n)(z_j^n − z_{j−1}^n, v) + a(v, z_{j,1−θ}^n) + (a₀^T ∇v, z_{j,1−θ}^n) + (z_{j,1−θ}^n b_y(t_{jθ}^n, y_{jθ}^n, r_j^n), v)
= (z_{j,1−θ}^n f_y(t_{jθ}^n, y_{jθ}^n, r_j^n) + g_y(t_{jθ}^n, y_{jθ}^n, ∇y_{jθ}^n, r_j^n), v) + (g_ȳ(t_{jθ}^n, y_{jθ}^n, ∇y_{jθ}^n, r_j^n), ∇v),
∀v ∈ V^n, j = N,...,1, z_N^n = 0, z_j^n ∈ V^n,

which has a unique solution z_{j−1}^n for each j, for Δt^n sufficiently small. Moreover, the mappings r^n ↦ z^n and (r^n, r′^n) ↦ DG^n(r^n, r′^n − r^n) are continuous.
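The forward-state / backward-adjoint structure of (3.9)-(3.10) can be illustrated on a scalar ODE analogue (our own toy reduction, not the paper's PDE code): an implicit (θ = 1) scheme for y′ + c y = u with cost G(u) = Δt Σ_j y_j²/2, whose gradient dG/du_j = Δt p_j is obtained from a single backward sweep:

```python
# Sketch: discrete gradient via an adjoint sweep for the implicit scheme
# (y_j - y_{j-1})/dt + c*y_j = u_j, cost G(u) = dt * sum_j y_j^2 / 2.

def solve_state(u, y0, dt, c):
    y, ys = y0, []
    for uj in u:
        y = (y + dt * uj) / (1.0 + c * dt)  # implicit (theta = 1) step
        ys.append(y)
    return ys

def gradient(u, y0, dt, c):
    """dG/du_j for all j, in one forward and one backward pass."""
    ys = solve_state(u, y0, dt, c)
    p, grad = 0.0, [0.0] * len(u)
    for j in range(len(u) - 1, -1, -1):
        p = (p + dt * ys[j]) / (1.0 + c * dt)  # backward adjoint step, p_{N+1} = 0
        grad[j] = dt * p
    return grad
```

The adjoint runs backward from a zero terminal value, mirroring z_N^n = 0 in (3.10); the cost of the gradient is two sweeps, independent of the number of control components.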
Theorem 3.3 Under Assumptions 2.2-5 and 3.1, if r^n ∈ R^n is optimal for the discrete problem with state constraints, Case (b), then it is extremal, i.e. there exist multipliers λ_m^n ∈ ℝ, m = 0,...,q, with

(3.11) λ_0^n ≥ 0, λ_m^n ≥ 0, m = p+1,...,q, Σ_{m=0}^q |λ_m^n| = 1,

such that

(3.12) Σ_{m=0}^q λ_m^n DG_m^n(r^n, r′^n − r^n) = Δt^n Σ_{j=1}^N ∫_Ω H^n(t_{jθ}^n, y_{jθ}^n, ∇y_{jθ}^n, z_{j,1−θ}^n, r′_j^n − r_j^n) dx ≥ 0, ∀r′^n ∈ R^n,

(3.13) λ_m^n [G_m^n(r^n) − ε_m^n] = 0, m = p+1,...,q,

where H^n and z^n are defined with g := Σ_{m=0}^q λ_m^n g_m. The global condition (3.12) is equivalent to the strong discrete blockwise minimum principle

(3.14) ∫_{E_i^n} H^n(t_{jθ}^n, y_{jθ}^n, ∇y_{jθ}^n, z_{j,1−θ}^n, r_{ij}^n) dx = min_{u∈U} ∫_{E_i^n} H^n(t_{jθ}^n, y_{jθ}^n, ∇y_{jθ}^n, z_{j,1−θ}^n, u) dx, i = 1,...,M, j = 1,...,N.
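When U is a finite set, checking or enforcing the blockwise minimum principle (3.14) reduces to a per-block minimization of the averaged Hamiltonian. A hypothetical sketch (the block grid, the control set and the averaged Hamiltonian values are illustrative inputs, not the paper's data):

```python
# Sketch of the blockwise minimum principle (3.14) for a finite control set U:
# on each block, an extremal Dirac control picks a u minimizing the averaged
# Hamiltonian integral over that block.

def blockwise_minimizer(H_block, U):
    """H_block(i, j, u): averaged Hamiltonian over block Q_ij for control u.
    Returns the minimizing u for each block of an illustrative 2x2 block grid."""
    return {(i, j): min(U, key=lambda u: H_block(i, j, u))
            for i in range(2) for j in range(2)}

U = [0.0, 1.0]  # compact, nonconvex (two-point) control set
H = lambda i, j, u: (u - (i + j) / 2.0) ** 2  # a toy averaged Hamiltonian
```

In the descent methods mentioned in the abstract, such per-block minimizers are the directions mixed into the current relaxed control, producing Gamkrelidze controls.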
4 Behavior in the limit
The following control approximation result is proved in [4].
Proposition 4.1 Under Assumptions 3.1 on Γ, for every r ∈ R, there exists a sequence (w^n ∈ W^n) that converges to r in R.
Lemma 4.1 (Stability) Under Assumptions 2.2-3 and 3.1, if Δt^n is sufficiently small, then for every r^n ∈ R^n we have the following inequalities, where the constants c are independent of n and r^n:

(4.1) ‖y_k^n‖ ≤ c, k = 0,...,N,
(4.2) Σ_{j=1}^N ‖y_j^n − y_{j−1}^n‖² ≤ c,
(4.3) Δt^n Σ_{j=1}^N ‖y_{jθ}^n‖₁² ≤ c,
(4.4) Δt^n Σ_{j=0}^N ‖y_j^n‖₁² ≤ c (under the condition Δt^n ≤ C(h^n)², for some constant C independent of n, if θ = 1/2),
(4.5) Δt^n Σ_{j=1}^N ‖y_j^n − y_{j−1}^n‖₁² ≤ c (with the condition Δt^n ≤ C(h^n)²).
Proof. Dropping the index n for simplicity of notation, setting v = 2θΔt y_j in the discrete equation, and using our assumptions on a, a₀, b, f (if b ≠ 0, then θ = 1 and the term (b(t_j, y_j, r_j), y_j) ≥ 0 can be dropped), we then have

(4.6) θ(‖y_j − y_{j−1}‖² + ‖y_j‖² − ‖y_{j−1}‖²) + 2θΔt (b(t_{jθ}, y_{jθ}, r_j), y_j)
+ Δt [a(y_{jθ}, y_{jθ}) + θ² a(y_j, y_j) − (1−θ)² a(y_{j−1}, y_{j−1})]
≤ 2θΔt (f(t_{jθ}, y_{jθ}, r_j), y_j) + 2θΔt ‖a₀‖_∞ ‖∇y_{jθ}‖ ‖y_j‖
≤ cΔt (1 + ‖y_{jθ}‖₁) ‖y_j‖
≤ cΔt [1 + β₁ ‖y_{jθ}‖₁² + (1 + 1/β₁) ‖y_j‖²],

hence, taking β₁ ≤ α₂/(2c), we get

(4.7) θ(‖y_j − y_{j−1}‖² + ‖y_j‖² − ‖y_{j−1}‖²) + Δt [(α₂/2) ‖y_{jθ}‖₁² + θ² a(y_j, y_j) − (1−θ)² a(y_{j−1}, y_{j−1})]
≤ cΔt (1 + ‖y_{jθ}‖² + ‖y_j‖²) ≤ cΔt (1 + ‖y_{j−1}‖² + ‖y_j‖²) ≤ cΔt (1 + ‖y_{j−1}‖² + ‖y_j − y_{j−1}‖²),

and, if in addition Δt ≤ θ/(2c),

(4.8) (θ − 1/2) ‖y_j − y_{j−1}‖² + (1/2)(‖y_j‖² − ‖y_{j−1}‖²)
+ Δt [(α₂/2) ‖y_{jθ}‖₁² + θ² a(y_j, y_j) − (1−θ)² a(y_{j−1}, y_{j−1})] ≤ cΔt (1 + ‖y_{j−1}‖²).

By summation over j = 1,...,k, we obtain, for θ > 1/2,

(4.9) (θ − 1/2) Σ_{j=1}^k ‖y_j − y_{j−1}‖² + (1/2) ‖y_k‖² + (α₂/2) Δt Σ_{j=1}^k ‖y_{jθ}‖₁²
≤ (1/2) ‖y₀‖² + (1−θ)² α₁ Δt ‖y₀‖₁² + c′ Δt Σ_{j=1}^k (1 + ‖y_{j−1}‖²), with c′ > 0,

and for θ = 1/2

(4.10) (1/2) Σ_{j=1}^k ‖y_j − y_{j−1}‖² + (1/2) ‖y_k‖² + (α₂/2) Δt Σ_{j=1}^k ‖y_{jθ}‖₁²
≤ (1/2) ‖y₀‖² + (α₁/4) Δt ‖y₀‖₁² + c Δt Σ_{j=1}^k (1 + ‖y_{j−1}‖²).

Since ‖y₀‖₁, hence ‖y₀‖, remains bounded, using the discrete Bellman-Gronwall inequality (see [18]), we obtain inequality (4.1). The inequalities (4.2), (4.3), and (4.4) if θ > 1/2, follow. By the inverse inequality (see [10]), the condition Δt^n ≤ C(h^n)², and inequality (4.2), we get inequality (4.5)

(4.11) Δt Σ_{j=1}^N ‖y_j − y_{j−1}‖₁² ≤ Δt Σ_{j=1}^N (C/h)² ‖y_j − y_{j−1}‖² ≤ c Σ_{j=1}^N ‖y_j − y_{j−1}‖² ≤ c.

If θ = 1/2, inequality (4.4) follows from inequalities (4.3) and (4.5).
For given values v₀,...,v_N in a vector space, we define the piecewise constant and continuous piecewise linear functions

(4.12) v⁻(t) := v_{j−1}, v⁺(t) := v_j, v_θ(t) := (1−θ) v_{j−1} + θ v_j, t ∈ I̊_j^n (the interior of I_j^n), j = 1,...,N,
(4.13) v̂(t) := v_{j−1} + ((t − t_{j−1}^n)/Δt^n)(v_j − v_{j−1}), t ∈ I_j^n, j = 1,...,N.
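The interpolants (4.12)-(4.13) can be sketched directly from nodal values on a uniform time grid (a minimal illustration; the function names are ours):

```python
# Sketch of (4.12)-(4.13): from nodal values v_0,...,v_N on a uniform grid of
# (0,T), the piecewise constant functions v-, v+ and the continuous piecewise
# linear interpolant v^ (here called v_hat).

def interpolants(vals, T):
    N = len(vals) - 1
    dt = T / N
    def index(t):  # j such that t lies in I_j = (t_{j-1}, t_j], clamped to 1..N
        return min(max(int(-(-t // dt)), 1), N)
    def v_minus(t):  # v-(t) = v_{j-1} on I_j
        return vals[index(t) - 1]
    def v_plus(t):   # v+(t) = v_j on I_j
        return vals[index(t)]
    def v_hat(t):    # linear interpolation between v_{j-1} and v_j
        j = index(t)
        s = (t - (j - 1) * dt) / dt
        return vals[j - 1] + s * (vals[j] - vals[j - 1])
    return v_minus, v_plus, v_hat
```

These are exactly the objects compared in the consistency argument: v⁺ − v⁻ measures the jump per step, while v̂ carries the distributional time derivative.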
If b = 0 (resp. b ≠ 0), we suppose in the sequel that Δt^n ≤ C (resp. Δt^n ≤ C(h^n)²), with C sufficiently small, so as to guarantee the results of Theorem 3.1 and Lemma 4.1.
Lemma 4.2 (Consistency of states and functionals) Under Assumptions 2.2-3 and 3.1, if r^n → r in R, then the corresponding discrete states ŷ^n, y⁻^n, y⁺^n, y_θ^n converge to y_r in L²(I, L⁴(Ω)) (resp. L²(Q)) strongly if b ≠ 0 (resp. b = 0), y_θ^n → y_r in L²(I,V) strongly, and

(4.14) lim_{n→∞} G_m^n(r^n) = G_m(r), m = 0,...,q.

Proof. By Lemma 4.1 (inequality (4.2) multiplied by Δt^n), y⁺^n − y⁻^n → 0 in L²(Q) strongly. Since, by inequality (4.3) in Lemma 4.1, y⁻^n and y⁺^n are bounded in L²(I,V), it follows that ŷ^n is also bounded in L²(I,V). By extracting subsequences, we can suppose that ŷ^n → y and y_θ^n → y in L²(I,V) weakly (hence in L²(Q) weakly), for the same y. The discrete state equation can be written in the form

(4.15) (d/dt)(ŷ^n(t), v^n) = (ψ^n(t), v^n)₁, ∀v^n ∈ V^n, a.e. in (0,T),

in the scalar distribution sense, where the piecewise constant function ψ^n is defined, using Riesz's representation theorem, by

(4.16) (ψ_j^n(t), v^n)₁ := −a(y_{jθ}^n, v^n) − (a₀(t_{jθ}^n)^T ∇y_{jθ}^n, v^n) − (b(t_{jθ}^n, y_{jθ}^n, r_j^n), v^n) + (f(t_{jθ}^n, y_{jθ}^n, r_j^n), v^n), in I̊_j^n, j = 1,...,N.