Robust Stochastic Stability of Discrete-Time Markovian Jump Neural Networks with Leakage Delay

Mathiyalagan Kalidass^a, Hongye Su^a, and Sakthivel Rathinasamy^{b,c}

^a National Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Yuquan Campus, Hangzhou 310027, China

^b Department of Mathematics, Sungkyunkwan University, Suwon 440-746, South Korea

^c Department of Mathematics, Sri Ramakrishna Institute of Technology, Coimbatore 641 010, India

Reprint requests to S. R.; E-mail: krsakthivel@yahoo.com

Z. Naturforsch. 69a, 70–80 (2014) / DOI: 10.5560/ZNA.2013-0078

Received May 17, 2013 / revised September 18, 2013 / published online December 18, 2013

This paper presents a robust analysis approach to the stochastic stability of uncertain Markovian jumping discrete-time neural networks (MJDNNs) with time delay in the leakage term. By choosing an appropriate Lyapunov functional and using the free weighting matrix technique, a set of delay-dependent stability criteria is derived. The criteria depend on both the lower and the upper bounds of the time delays. They are established in terms of linear matrix inequalities (LMIs), which can be solved effectively by standard numerical packages. Finally, illustrative numerical examples with simulation results are provided to demonstrate the applicability of the obtained results. It is shown that even if there is no leakage delay, the obtained results are less restrictive than those in some recent works.

Key words: Discrete-Time Neural Networks; Stochastic Stability; Leakage Delays; Linear Matrix Inequality; Markovian Jump Systems.

1. Introduction

In the past two decades, neural networks have received considerable attention and have been successfully applied in many areas such as image processing, pattern recognition, associative memories, signal processing, optimization solvers, and so on [1–3]. In particular, these applications heavily depend on the stability of the equilibrium of the neural network, so stability analysis is a necessary step for the applications of neural networks. In practice, discrete-time neural networks become more important than their continuous-time counterparts when implementing neural networks in a digital way. Some important results related to the stability issue have been reported for discrete-time neural networks in [4–12].

On the other hand, both in biological and artificial neural networks, the interactions between neurons are generally asynchronous, which inevitably results in time delays. In electronic implementations of neural networks, the delays are usually time-varying due to the finite switching speed of amplifiers. It is known that the existence of time delays is often a source of instability, oscillation, and poor performance in neural networks [13]. Therefore, it is of both theoretical and practical importance to study the stability of discrete-time neural networks with time-varying delays. Moreover, neural networks with Markovian jumping parameters are of great significance in modelling a class of neural networks with finite network modes. Many interesting results about the stability of time-delayed neural networks with Markovian jumping parameters have been proposed via the linear matrix inequality (LMI) approach [14–19].

Moreover, in studying the applications of neural networks, the connection weights of the neurons depend on certain resistance and capacitance values which include uncertainties. Thus, the stability problem of uncertain discrete-time neural networks has received much attention, and many interesting results have been proposed to guarantee asymptotic, exponential, and stochastic stability of uncertain neural networks with various types of time delays [20–22].

However, the conditions imposed on the neural network are determined by the characteristics of the activation functions as well as the network parameters. Thus, the generalization of activation functions will provide a wider scope for neural network design and applications [23].

Further, the time delay in the leakage term also has a great impact on the dynamical behaviour of neural networks. Thus, the stability analysis of neural networks with time delay in the leakage term also has primary significance in neural network research. Although the stability analysis of neural networks has received much attention, the stability of stochastic neural networks with time delay in the leakage term has not been fully investigated. So far, few results have been reported on the stability analysis of neural networks with time delay in the leakage (or "forgetting") term [24–27]. In [28], delay-dependent sufficient conditions were derived for the existence of a unique equilibrium, together with its asymptotic and exponential stability, for bidirectional associative memory (BAM) neural networks with a constant delay in the leakage term; it was also pointed out that a time delay in the stabilizing negative feedback term can destabilize a system. Recently, Balasubramaniam et al. [29] studied the delay-dependent passivity analysis for Markovian jumping neural networks of neutral type with time delay in the leakage term and distributed delays. It is shown via simulation results that the neural network is stable when there is no leakage delay and becomes unstable when a small leakage delay occurs. This indicates that the stability of neural networks may be affected by their sensitivity to the time delay in the leakage term. All the above-mentioned results are derived for continuous-time neural networks; to the best of the authors' knowledge, no result has been reported for the discrete-time case with time delay in the leakage term.

Motivated by the above discussion, in this paper we consider the stochastic stability analysis of Markovian jumping discrete-time neural networks (MJDNNs) with time delays in the leakage term. By employing a Lyapunov–Krasovskii functional (LKF) and the LMI technique, some novel delay-dependent conditions are obtained to ensure the stochastic stability of a class of MJDNNs in the presence of time delays in the leakage term. Further, the result is extended to derive stability conditions for uncertain discrete-time neural networks in which the parameter uncertainties are assumed to be time-varying but norm bounded. The derived conditions are established in terms of LMIs, which can be easily solved by the MATLAB LMI toolbox. In particular, it is shown that in the absence of a leakage delay, the obtained results are less conservative than those in some recent works. Finally, some numerical examples and their simulations are given to show the effectiveness of the obtained results.
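For readers who wish to experiment with such conditions, the sketch below checks the feasibility of a simple discrete-time Lyapunov LMI with a convex-optimization package. This is only a minimal illustration of the workflow (the paper itself uses the MATLAB LMI toolbox); the test matrix, the strictness margin, and the use of CVXPY are assumptions of this sketch, not part of the paper.

```python
# Minimal feasibility check of a discrete-time Lyapunov LMI,
#   P > 0,  A^T P A - P < 0,
# the simplest instance of the kind of LMI condition derived in this paper.
# Illustrative sketch only; the matrix A and margin eps are assumptions.
import numpy as np
import cvxpy as cp

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])          # example Schur-stable matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                           # enforce strictness numerically
constraints = [P >> eps * np.eye(n),
               A.T @ P @ A - P << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                   # 'optimal' means the LMI is feasible
```

The same mechanics (declare symmetric matrix variables, impose definiteness constraints, run a feasibility solve) carry over to the larger block LMIs stated in Section 3.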

Notations: Throughout this paper, unless otherwise specified, the superscripts 'T' and '$-1$' stand for matrix transposition and matrix inverse, respectively; $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space and $\mathbb{R}^{n\times n}$ the set of $n\times n$ real matrices; the notation $P>0$ means that $P$ is real, symmetric, and positive definite; $\mathrm{diag}\{\cdot\}$ denotes a block-diagonal matrix; an asterisk ($*$) represents a term that is induced by symmetry. Matrices which are not explicitly stated are assumed to have dimensions compatible for matrix multiplication. $\mathbb{Z}$ denotes the set of nonnegative integers. Moreover, let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathcal{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge0}$ satisfying the usual conditions (i.e., the filtration contains all $\mathcal{P}$-null sets and is right continuous). $\mathbb{E}[\cdot]$ stands for the mathematical expectation with respect to the given probability measure $\mathcal{P}$.

2. Problem Formulation and Preliminaries

Consider the following discrete-time neural network with leakage delay and Markovian jumping parameters:

$$
\begin{aligned}
x(k+1) ={}& \big(A(r_k)+\Delta A(k,r_k)\big)\,x(k-\sigma) + \big(B(r_k)+\Delta B(k,r_k)\big)\,f(x(k))\\
&+ \big(C(r_k)+\Delta C(k,r_k)\big)\,f(x(k-\tau(k))),\\
x(k) ={}& \phi(k)\quad\text{for every }k\in[-\tau,0],\qquad \tau=\max\{\tau_M,\sigma\},
\end{aligned}\tag{1}
$$

where $x(k)\in\mathbb{R}^n$ is the state vector; $A(r_k)=\mathrm{diag}\{a_1(r_k),a_2(r_k),\ldots,a_n(r_k)\}$ is the positive diagonal matrix that represents the self-feedback term, with $|a_s(r_k)|<1$; $B(r_k)=[b_{st}(r_k)]_{n\times n}$ and $C(r_k)=[c_{st}(r_k)]_{n\times n}$ are known real constant matrices with appropriate dimensions; $f(x(k))=[f_1(x_1(k)),f_2(x_2(k)),\ldots,f_n(x_n(k))]^T$ denotes the neuron activation function. The positive integers $\sigma$ and $\tau(k)$ denote the leakage delay and the time-varying discrete delay, respectively; $\tau(k)$ satisfies $\tau_m\le\tau(k)\le\tau_M$, where $\tau_m$ and $\tau_M$ are positive integers.
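To make the dynamics in (1) concrete, the following minimal sketch simulates a nominal two-neuron instance (no uncertainties, $\Delta A_i=\Delta B_i=\Delta C_i=0$). All parameter values, the two-mode transition matrix, and the $\tanh$ activation are illustrative assumptions of this sketch, not data from the paper's examples.

```python
# Hedged sketch: simulate a nominal two-neuron instance of the MJDNN (1).
# All numerical values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, N = 2, 2                           # neurons, Markov modes
sigma, tau_m, tau_M = 2, 1, 3         # leakage delay and discrete-delay bounds
A = [np.diag([0.4, 0.5]), np.diag([0.3, 0.6])]   # mode-dependent self-feedback
B = [0.1 * rng.standard_normal((n, n)) for _ in range(N)]
C = [0.1 * rng.standard_normal((n, n)) for _ in range(N)]
Pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])           # Pr(r_{k+1}=j | r_k=i), rows sum to 1
f = np.tanh                           # activation satisfying Assumption 1

T = 200
hist = max(tau_M, sigma)              # length of the initial function phi
x = np.zeros((T + 1 + hist, n))
x[:hist + 1] = 0.5                    # constant initial condition phi(k)
r = 0                                 # initial mode r_0
for k in range(hist, T + hist):
    tau_k = rng.integers(tau_m, tau_M + 1)        # tau_m <= tau(k) <= tau_M
    x[k + 1] = (A[r] @ x[k - sigma]               # leakage term, delayed by sigma
                + B[r] @ f(x[k])
                + C[r] @ f(x[k - tau_k]))
    r = rng.choice(N, p=Pi[r])                    # Markovian mode jump
print("final state:", x[-1])
```

With such small illustrative weights the simulated trajectory typically decays toward the origin; Theorem 1 below provides LMI conditions that certify this behaviour in the stochastic sense.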

Let $S=\{1,2,\ldots,N\}$ and let $\{r_k,\,k\in\mathbb{Z}\}$ be a homogeneous, finite-state Markovian process with right-continuous trajectories on the probability space $(\Omega,\mathcal{F},\{\mathcal{F}_k\}_{k\ge0},\mathcal{P})$, with generator $\Pi=[\pi_{ij}]_{N\times N}$, $i,j\in S$, given by $\Pr(r_{k+1}=j\,|\,r_k=i)=\pi_{ij}$, where $\pi_{ij}$ is the transition probability from mode $i$ to mode $j$, satisfying $\pi_{ij}\ge0$ for $i,j\in S$ and $\sum_{j=1}^{N}\pi_{ij}=1$.

Further, for $r_k=i$, $i\in S$, we write $A_i=A(r_k)$, $B_i=B(r_k)$, $C_i=C(r_k)$, and we assume that the parameter uncertainties $\Delta A_i=\Delta A(k,r_k)$, $\Delta B_i=\Delta B(k,r_k)$, and $\Delta C_i=\Delta C(k,r_k)$ are time-varying and described by

$$
[\Delta A_i\ \ \Delta B_i\ \ \Delta C_i] = E_iF_i(k)\,[N_{1i}\ \ N_{2i}\ \ N_{3i}],\tag{2}
$$

where $N_{1i}$, $N_{2i}$, $N_{3i}$, and $E_i$ are known constant matrices of appropriate dimensions and $F_i(k)$ is an unknown time-varying matrix with Lebesgue measurable elements bounded by $F_i^T(k)F_i(k)\le I$.

Assumption 1 ([30]). For $s\in\{1,2,\ldots,n\}$, the neuron activation functions $f_s(\cdot)$ in (1) are continuous and bounded, satisfying

$$
F_s^- \le \frac{f_s(x_1)-f_s(x_2)}{x_1-x_2} \le F_s^+,\qquad \forall\, x_1,x_2\in\mathbb{R},\ x_1\ne x_2,
$$

where $F_s^-$ and $F_s^+$ are constants.
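For instance, $f_s(x)=\tanh(x)$ satisfies Assumption 1 with $F_s^-=0$ and $F_s^+=1$. The short check below verifies these sector bounds numerically on random pairs; it is an illustrative sanity check, not part of the paper.

```python
# Numerically check the sector condition of Assumption 1 for f = tanh,
# whose difference quotients lie in [0, 1] (F_s^- = 0, F_s^+ = 1).
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=10_000), rng.normal(size=10_000)
mask = x1 != x2
q = (np.tanh(x1[mask]) - np.tanh(x2[mask])) / (x1[mask] - x2[mask])
assert q.min() >= 0.0 and q.max() <= 1.0
print(f"difference quotients in [{q.min():.4f}, {q.max():.4f}]")
```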

The following definition and lemmas will be used in the proof of our main results.

Definition 1. For every initial condition $(\phi,r_0)$, the MJDNN (1) is said to be stochastically stable if the following condition is satisfied:

$$
\mathbb{E}\left[\sum_{k=0}^{\infty}\|x_k(\phi,r_0)\|^2\right]<\infty,
$$

where $x_k(\phi,r_0)$ denotes the solution of (1) at time $k$ under the initial conditions $\phi$ and $r_0$.

Lemma 1 ([31]). For any symmetric constant matrix $Z\in\mathbb{R}^{n\times n}$, $Z\ge0$, two scalars $\tau_m$ and $\tau_M$ satisfying $\tau_m<\tau_M$, and the vector-valued function $\eta(k)=x(k+1)-x(k)$, $k\in\mathbb{Z}^+$, we have

$$
-\sum_{s=k-\tau_M}^{k-\tau_m-1}\eta^T(s)Z\eta(s)\le-\frac{1}{\tau_2}\left(\sum_{s=k-\tau_M}^{k-\tau_m-1}\eta(s)\right)^{\!T}Z\left(\sum_{s=k-\tau_M}^{k-\tau_m-1}\eta(s)\right),
$$

where $\tau_2=\tau_M-\tau_m$.
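As a quick sanity check (not part of the paper), this discrete Jensen-type inequality can be verified numerically for random data; the dimensions and delay bounds below are arbitrary choices:

```python
# Verify Lemma 1 numerically:
#   -sum eta^T Z eta  <=  -(1/tau2) (sum eta)^T Z (sum eta),  Z >= 0.
import numpy as np

rng = np.random.default_rng(2)
n, tau_m, tau_M = 3, 1, 5
tau2 = tau_M - tau_m
M = rng.standard_normal((n, n))
Z = M @ M.T                               # random symmetric PSD matrix
eta = rng.standard_normal((tau2, n))      # eta(s) over the summation window
lhs = -sum(e @ Z @ e for e in eta)
s = eta.sum(axis=0)
rhs = -(1.0 / tau2) * s @ Z @ s
assert lhs <= rhs + 1e-12
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
```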

Lemma 2 ([22]). For any vectors $x,y\in\mathbb{R}^n$, real matrices $A$, $P$, $D$, $E$, and $F$ of appropriate dimensions with $P>0$, $F^TF\le I$, and a scalar $\varepsilon>0$, the following inequalities hold:

(i) $2x^TDFEy\le\varepsilon^{-1}x^TDD^Tx+\varepsilon\,y^TE^TEy$;

(ii) if $P-\varepsilon DD^T>0$, then $(A+DFE)^TP^{-1}(A+DFE)\le A^T(P-\varepsilon DD^T)^{-1}A+\varepsilon^{-1}E^TE$.

3. Stability Results

In this section, we derive the conditions for the stochastic stability of MJDNNs in the presence of a time delay in the leakage term. First, we study the stochastic stability of the MJDNN (1) when the uncertainties $\Delta A_i=0$, $\Delta B_i=0$, and $\Delta C_i=0$. For presentation convenience, we denote

$$
F_1=\mathrm{diag}\big\{F_1^-F_1^+,\,F_2^-F_2^+,\,\ldots,\,F_n^-F_n^+\big\},\qquad
F_2=\mathrm{diag}\Big\{\frac{F_1^-+F_1^+}{2},\,\frac{F_2^-+F_2^+}{2},\,\ldots,\,\frac{F_n^-+F_n^+}{2}\Big\}.
$$

Theorem 1. Under Assumption 1, the MJDNN (1) without uncertainties is stochastically stable if there exist matrices $P_i>0$, $Q_p>0$ $(p=1,2,3,5)$, $Q_4>0$, $R_u>0$ $(u=1,2,3,4)$, $Z_v>0$ $(v=1,2,3)$, diagonal matrices $D_q>0$ $(q=1,2)$, and matrices $M_l$ $(l=1,2,3,4,5)$ with appropriate dimensions such that the following LMIs hold for all $i,j\in S$, $i\ne j$:

$$
\begin{bmatrix}
\Pi & \sqrt{\tau_M}\,M_1 & \sqrt{\tau_2}\,M_2 & \sqrt{\tau_2}\,M_3\\
* & -Z_1 & 0 & 0\\
* & * & -(Z_1+Z_2) & 0\\
* & * & * & -Z_2
\end{bmatrix}<0,\tag{3}
$$

where $\Pi=[\Pi_{p,q}]_{16\times16}$, $M_j=[\,M_{j1}^T\ \ M_{j2}^T\ \ 0\ \ M_{j3}^T\ \ 0\ \ M_{j4}^T\ \ 0_{n,10n}\,]^T$, $j=1,2,3,4,5$,

$$
P_i=\begin{bmatrix}
P_{11i}&P_{12i}&P_{13i}&P_{14i}\\
*&P_{22i}&P_{23i}&P_{24i}\\
*&*&P_{33i}&P_{34i}\\
*&*&*&P_{44i}
\end{bmatrix},\qquad
Q_p=\begin{bmatrix}Q_{p1}&Q_{p2}\\ *&Q_{p3}\end{bmatrix},\ p=1,2,3,5,\qquad
\bar\pi=\sum_{j=1}^{N}\pi_{ij},
$$

and the nonzero entries of $\Pi$ are

$$
\begin{aligned}
\Pi_{1,1}&=Q_{11}+(\tau_2+1)Q_{21}+Q_{31}+Q_{51}-P_{11i}+\tau_m^2R_1+\tau_MR_2+\tau_2R_3+\sigma R_4-F_1D_1\\
&\quad+\bar\pi P_{11j}-2M_{51}+2M_{11}-\tfrac{1}{\sigma}Z_3,\\
\Pi_{1,2}&=\bar\pi P_{12j}-P_{12i}+\tfrac{1}{\sigma}Z_3+M_{12}^T+M_{51}A_i-M_{52}^T,\quad
\Pi_{1,3}=\bar\pi P_{13j}-P_{13i}-M_{41}+M_{31},\\
\Pi_{1,4}&=M_{41}-M_{54}^T-M_{11}+M_{13}^T+M_{21}-M_{31},\quad
\Pi_{1,5}=\bar\pi P_{14j}-P_{14i}-M_{21},\\
\Pi_{1,6}&=\bar\pi P_{11j}+Q_{12}+(\tau_2+1)Q_{22}+Q_{32}-M_{51}-M_{53}^T+M_{14}^T,\\
\Pi_{1,7}&=\bar\pi P_{12j},\quad \Pi_{1,8}=\bar\pi P_{13j},\quad \Pi_{1,9}=\bar\pi P_{14j},\quad
\Pi_{1,11}=F_2D_1+M_{51}B_i,\quad \Pi_{1,12}=M_{51}C_i,\\
\Pi_{1,13}&=M_{41},\quad \Pi_{1,14}=-M_{41},\quad \Pi_{1,15}=M_{41},\\
\Pi_{2,2}&=\bar\pi P_{22j}-P_{22i}-Q_{51}-\tfrac{1}{\sigma}Z_3+M_{52}A_i+A_i^TM_{52}^T,\quad
\Pi_{2,3}=\bar\pi P_{23j}-P_{23i}-M_{42}+M_{32},\\
\Pi_{2,4}&=M_{42}-M_{12}-A_i^TM_{54}^T+M_{22}-M_{32},\quad
\Pi_{2,5}=\bar\pi P_{24j}-P_{24i}-M_{22},\\
\Pi_{2,6}&=\bar\pi P_{12j}^T+A_i^TM_{51}^T-M_{52},\quad
\Pi_{2,7}=\bar\pi P_{22j},\quad \Pi_{2,8}=\bar\pi P_{23j},\quad \Pi_{2,9}=\bar\pi P_{24j},\\
\Pi_{2,11}&=M_{52}B_i,\quad \Pi_{2,12}=M_{52}C_i,\quad \Pi_{2,13}=M_{42},\quad \Pi_{2,14}=-M_{42},\quad \Pi_{2,15}=M_{42},\\
\Pi_{3,3}&=\bar\pi P_{33j}-P_{33i}-Q_{11},\quad \Pi_{3,4}=M_{33}^T-M_{43}^T,\quad \Pi_{3,5}=\bar\pi P_{34j}-P_{34i},\\
\Pi_{3,6}&=\bar\pi P_{13j}^T-M_{44}^T+M_{34}^T,\quad \Pi_{3,7}=\bar\pi P_{23j}^T,\quad \Pi_{3,8}=\bar\pi P_{33j}-Q_{12},\quad \Pi_{3,9}=\bar\pi P_{34j},\\
\Pi_{4,4}&=2M_{43}-2M_{13}+2M_{23}-2M_{33}-Q_{21}-F_1D_2,\quad \Pi_{4,5}=-M_{23},\\
\Pi_{4,6}&=M_{44}^T-M_{54}-M_{14}^T+M_{24}^T-M_{34}^T,\quad \Pi_{4,9}=-Q_{22},\\
\Pi_{4,11}&=M_{54}B_i,\quad \Pi_{4,12}=F_2D_2+M_{54}C_i,\quad \Pi_{4,13}=M_{43},\quad \Pi_{4,14}=-M_{43},\quad \Pi_{4,15}=M_{43},\\
\Pi_{5,5}&=\bar\pi P_{44j}-P_{44i}-Q_{31},\quad \Pi_{5,6}=\bar\pi P_{14j}^T-M_{24}^T,\quad \Pi_{5,7}=\bar\pi P_{24j}^T,\\
\Pi_{5,8}&=\bar\pi P_{34j}^T,\quad \Pi_{5,9}=\bar\pi P_{44j},\quad \Pi_{5,10}=-Q_{32},\\
\Pi_{6,6}&=\bar\pi P_{11j}+Q_{13}+(\tau_2+1)Q_{23}+Q_{33}+Q_{53}+\tau_2Z_1+\tau_MZ_2+\sigma Z_3-2M_{53},\\
\Pi_{6,7}&=\bar\pi P_{12j},\quad \Pi_{6,8}=\bar\pi P_{13j},\quad \Pi_{6,9}=\bar\pi P_{14j},\\
\Pi_{6,11}&=M_{53}B_i,\quad \Pi_{6,12}=M_{53}C_i,\quad \Pi_{6,13}=M_{44},\quad \Pi_{6,14}=-M_{44},\quad \Pi_{6,15}=M_{44},\\
\Pi_{7,7}&=\bar\pi P_{22j}-Q_{53},\quad \Pi_{7,8}=\bar\pi P_{23j},\quad \Pi_{7,9}=\bar\pi P_{24j},\\
\Pi_{8,8}&=\bar\pi P_{33j}-Q_{13},\quad \Pi_{8,9}=\bar\pi P_{34j},\quad \Pi_{9,9}=\bar\pi P_{44j}-Q_{23},\quad \Pi_{10,10}=-Q_{33},\\
\Pi_{11,11}&=(\tau_2+1)Q_4-D_1,\quad \Pi_{12,12}=-Q_4-D_2,\quad \Pi_{13,13}=-R_1,\\
\Pi_{14,14}&=-\tfrac{1}{\tau_M}R_2,\quad \Pi_{15,15}=-\tfrac{1}{\tau_2}R_3,\quad \Pi_{16,16}=-\tfrac{1}{\sigma}R_4,
\end{aligned}
$$

with the remaining entries equal to zero.

Proof. In order to prove the stochastic stability results, we consider the following LKF:

$$
V(k)=\sum_{r=1}^{5}V_r(k),\tag{4}
$$

where

$$
\begin{aligned}
V_1(k)&=\xi_1^T(k)P_i\,\xi_1(k),\\
V_2(k)&=\sum_{s=k-\tau_m}^{k-1}\lambda^T(s)Q_1\lambda(s)+\sum_{s=k-\tau(k)}^{k-1}\lambda^T(s)Q_2\lambda(s)+\sum_{s=k-\tau_M}^{k-1}\lambda^T(s)Q_3\lambda(s)\\
&\quad+\sum_{s=k-\sigma}^{k-1}\lambda^T(s)Q_5\lambda(s)+\sum_{s=k-\tau(k)}^{k-1}f^T(x(s))Q_4f(x(s)),\\
V_3(k)&=\sum_{j=-\tau_M+1}^{-\tau_m}\ \sum_{s=k+j}^{k-1}\lambda^T(s)Q_2\lambda(s)+\sum_{j=-\tau_M+1}^{-\tau_m}\ \sum_{s=k+j}^{k-1}f^T(x(s))Q_4f(x(s)),\\
V_4(k)&=\tau_m\sum_{j=-\tau_m}^{-1}\ \sum_{s=k+j}^{k-1}x^T(s)R_1x(s)+\sum_{j=-\tau_M}^{-1}\ \sum_{s=k+j}^{k-1}x^T(s)R_2x(s)\\
&\quad+\sum_{j=-\tau_M+1}^{-\tau_m}\ \sum_{s=k+j}^{k-1}x^T(s)R_3x(s)+\sum_{j=-\sigma}^{-1}\ \sum_{s=k+j}^{k-1}x^T(s)R_4x(s),\\
V_5(k)&=\sum_{j=-\tau_M}^{-\tau_m-1}\ \sum_{s=k+j}^{k-1}\eta^T(s)Z_1\eta(s)+\sum_{j=-\tau_M}^{-1}\ \sum_{s=k+j}^{k-1}\eta^T(s)Z_2\eta(s)\\
&\quad+\sum_{j=-\sigma}^{-1}\ \sum_{s=k+j}^{k-1}\eta^T(s)Z_3\eta(s),
\end{aligned}
$$

with $\lambda(k)=[x^T(k)\ \ \eta^T(k)]^T$, $\eta(k)=x(k+1)-x(k)$, and $\xi_1(k)=[x^T(k)\ \ x^T(k-\sigma)\ \ x^T(k-\tau_m)\ \ x^T(k-\tau_M)]^T$. Calculating the difference $\Delta V_r(k)=V_r(k+1)-V_r(k)$ along the trajectory of (1), we get

$$
\begin{aligned}
\Delta V_1(k)={}&\xi_1^T(k+1)\sum_{j=1}^{N}\pi_{ij}P_j\,\xi_1(k+1)-\xi_1^T(k)P_i\,\xi_1(k)\\
={}&\begin{bmatrix}x(k)+\eta(k)\\ x(k-\sigma)+\eta(k-\sigma)\\ x(k-\tau_m)+\eta(k-\tau_m)\\ x(k-\tau_M)+\eta(k-\tau_M)\end{bmatrix}^T
\bar\pi P_j
\begin{bmatrix}x(k)+\eta(k)\\ x(k-\sigma)+\eta(k-\sigma)\\ x(k-\tau_m)+\eta(k-\tau_m)\\ x(k-\tau_M)+\eta(k-\tau_M)\end{bmatrix}\\
&-\begin{bmatrix}x(k)\\ x(k-\sigma)\\ x(k-\tau_m)\\ x(k-\tau_M)\end{bmatrix}^T
P_i
\begin{bmatrix}x(k)\\ x(k-\sigma)\\ x(k-\tau_m)\\ x(k-\tau_M)\end{bmatrix},
\end{aligned}\tag{5}
$$

$$
\begin{aligned}
\Delta V_2(k)\le{}&\lambda^T(k)(Q_1+Q_2+Q_3+Q_5)\lambda(k)-\lambda^T(k-\tau_m)Q_1\lambda(k-\tau_m)\\
&-\lambda^T(k-\tau(k))Q_2\lambda(k-\tau(k))-\lambda^T(k-\tau_M)Q_3\lambda(k-\tau_M)\\
&-\lambda^T(k-\sigma)Q_5\lambda(k-\sigma)+\sum_{s=k+1-\tau_M}^{k-\tau_m}\lambda^T(s)Q_2\lambda(s)+f^T(x(k))Q_4f(x(k))\\
&-f^T(x(k-\tau(k)))Q_4f(x(k-\tau(k)))+\sum_{s=k+1-\tau_M}^{k-\tau_m}f^T(x(s))Q_4f(x(s))\\
={}&x^T(k)(Q_{11}+Q_{21}+Q_{31}+Q_{51})x(k)+2x^T(k)(Q_{12}+Q_{22}+Q_{32}+Q_{52})\eta(k)\\
&+\eta^T(k)(Q_{13}+Q_{23}+Q_{33}+Q_{53})\eta(k)\\
&-x^T(k-\tau_m)Q_{11}x(k-\tau_m)-2x^T(k-\tau_m)Q_{12}\eta(k-\tau_m)-\eta^T(k-\tau_m)Q_{13}\eta(k-\tau_m)\\
&-x^T(k-\tau(k))Q_{21}x(k-\tau(k))-2x^T(k-\tau(k))Q_{22}\eta(k-\tau(k))\\
&-\eta^T(k-\tau(k))Q_{23}\eta(k-\tau(k))-x^T(k-\tau_M)Q_{31}x(k-\tau_M)\\
&-2x^T(k-\tau_M)Q_{32}\eta(k-\tau_M)-\eta^T(k-\tau_M)Q_{33}\eta(k-\tau_M)\\
&-x^T(k-\sigma)Q_{51}x(k-\sigma)-2x^T(k-\sigma)Q_{52}\eta(k-\sigma)-\eta^T(k-\sigma)Q_{53}\eta(k-\sigma)\\
&+\sum_{s=k+1-\tau_M}^{k-\tau_m}\lambda^T(s)Q_2\lambda(s)+f^T(x(k))Q_4f(x(k))\\
&-f^T(x(k-\tau(k)))Q_4f(x(k-\tau(k)))+\sum_{s=k+1-\tau_M}^{k-\tau_m}f^T(x(s))Q_4f(x(s)),
\end{aligned}\tag{6}
$$

$$
\begin{aligned}
\Delta V_3(k)={}&\tau_2\lambda^T(k)Q_2\lambda(k)+\tau_2f^T(x(k))Q_4f(x(k))\\
&-\sum_{s=k+1-\tau_M}^{k-\tau_m}\lambda^T(s)Q_2\lambda(s)-\sum_{s=k+1-\tau_M}^{k-\tau_m}f^T(x(s))Q_4f(x(s))\\
={}&\tau_2\big(x^T(k)Q_{21}x(k)+2x^T(k)Q_{22}\eta(k)+\eta^T(k)Q_{23}\eta(k)+f^T(x(k))Q_4f(x(k))\big)\\
&-\sum_{s=k+1-\tau_M}^{k-\tau_m}\lambda^T(s)Q_2\lambda(s)-\sum_{s=k+1-\tau_M}^{k-\tau_m}f^T(x(s))Q_4f(x(s)),
\end{aligned}\tag{7}
$$

$$
\begin{aligned}
\Delta V_4(k)={}&x^T(k)\big(\tau_m^2R_1+\tau_MR_2+\tau_2R_3+\sigma R_4\big)x(k)-\tau_m\sum_{s=k-\tau_m}^{k-1}x^T(s)R_1x(s)\\
&-\sum_{s=k-\tau_M}^{k-1}x^T(s)R_2x(s)-\sum_{s=k+1-\tau_M}^{k-\tau_m}x^T(s)R_3x(s)-\sum_{s=k-\sigma}^{k-1}x^T(s)R_4x(s).
\end{aligned}
$$

By using Lemma 1, we get

$$
\begin{aligned}
\Delta V_4(k)\le{}&x^T(k)\big(\tau_m^2R_1+\tau_MR_2+\tau_2R_3+\sigma R_4\big)x(k)\\
&-\Big(\sum_{s=k-\tau_m}^{k-1}x(s)\Big)^TR_1\Big(\sum_{s=k-\tau_m}^{k-1}x(s)\Big)
-\frac{1}{\tau_M}\Big(\sum_{s=k-\tau_M}^{k-1}x(s)\Big)^TR_2\Big(\sum_{s=k-\tau_M}^{k-1}x(s)\Big)\\
&-\frac{1}{\tau_2}\Big(\sum_{s=k+1-\tau_M}^{k-\tau_m}x(s)\Big)^TR_3\Big(\sum_{s=k+1-\tau_M}^{k-\tau_m}x(s)\Big)
-\frac{1}{\sigma}\Big(\sum_{s=k-\sigma}^{k-1}x(s)\Big)^TR_4\Big(\sum_{s=k-\sigma}^{k-1}x(s)\Big),
\end{aligned}\tag{8}
$$

$$
\begin{aligned}
\Delta V_5(k)={}&\eta^T(k)\big(\tau_2Z_1+\tau_MZ_2+\sigma Z_3\big)\eta(k)-\sum_{s=k-\tau_M}^{k-\tau_m-1}\eta^T(s)Z_1\eta(s)\\
&-\sum_{s=k-\tau_M}^{k-1}\eta^T(s)Z_2\eta(s)-\sum_{s=k-\sigma}^{k-1}\eta^T(s)Z_3\eta(s),
\end{aligned}
$$

and, since $\sum_{s=k-\sigma}^{k-1}\eta(s)=x(k)-x(k-\sigma)$, Lemma 1 gives

$$
-\sum_{s=k-\sigma}^{k-1}\eta^T(s)Z_3\eta(s)\le-\frac{1}{\sigma}\big[x(k)-x(k-\sigma)\big]^TZ_3\big[x(k)-x(k-\sigma)\big].\tag{9}
$$

On the other hand, for any matrices $M_i$, $i=1,2,3,4$, we have the following identities [32]:

$$
2\zeta^T(k)M_1\Big[x(k)-x(k-\tau(k))-\sum_{s=k-\tau(k)}^{k-1}\eta(s)\Big]=0,\tag{10}
$$

$$
2\zeta^T(k)M_2\Big[x(k-\tau(k))-x(k-\tau_M)-\sum_{s=k-\tau_M}^{k-\tau(k)-1}\eta(s)\Big]=0,\tag{11}
$$

$$
2\zeta^T(k)M_3\Big[x(k-\tau_m)-x(k-\tau(k))-\sum_{s=k-\tau(k)}^{k-\tau_m-1}\eta(s)\Big]=0,\tag{12}
$$

$$
2\zeta^T(k)M_4\Big[\sum_{s=k+1-\tau_M}^{k-\tau_m}x(s)+\sum_{s=k-\tau_m}^{k-1}x(s)-\sum_{s=k-\tau_M}^{k-1}x(s)-x(k-\tau_m)+x(k-\tau_M)\Big]=0.\tag{13}
$$

Moreover, since $\tau_m\le\tau(k)\le\tau_M$,

$$
\tau_M\,\zeta^T(k)M_1Z_1^{-1}M_1^T\zeta(k)-\sum_{s=k-\tau(k)}^{k-1}\zeta^T(k)M_1Z_1^{-1}M_1^T\zeta(k)\ge0,\tag{14}
$$

$$
\tau_2\,\zeta^T(k)M_2(Z_1+Z_2)^{-1}M_2^T\zeta(k)-\sum_{s=k-\tau_M}^{k-\tau(k)-1}\zeta^T(k)M_2(Z_1+Z_2)^{-1}M_2^T\zeta(k)\ge0,\tag{15}
$$

$$
\tau_2\,\zeta^T(k)M_3Z_2^{-1}M_3^T\zeta(k)-\sum_{s=k-\tau(k)}^{k-\tau_m-1}\zeta^T(k)M_3Z_2^{-1}M_3^T\zeta(k)\ge0.\tag{16}
$$

In addition, we have $\eta(k)=x(k+1)-x(k)=A_ix(k-\sigma)+B_if(x(k))+C_if(x(k-\tau(k)))-x(k)$. Then, for any matrix $M_5$, we have

$$
2\zeta^T(k)M_5\big[A_ix(k-\sigma)-x(k)+B_if(x(k))+C_if(x(k-\tau(k)))-\eta(k)\big]=0,\tag{17}
$$

where

$$
\begin{aligned}
\zeta(k)=\Big[\,&x^T(k)\ \ x^T(k-\sigma)\ \ x^T(k-\tau_m)\ \ x^T(k-\tau(k))\ \ x^T(k-\tau_M)\ \ \eta^T(k)\ \ \eta^T(k-\sigma)\\
&\eta^T(k-\tau_m)\ \ \eta^T(k-\tau(k))\ \ \eta^T(k-\tau_M)\ \ f^T(x(k))\ \ f^T(x(k-\tau(k)))\\
&\sum_{s=k-\tau_m}^{k-1}x^T(s)\ \ \sum_{s=k-\tau_M}^{k-1}x^T(s)\ \ \sum_{s=k+1-\tau_M}^{k-\tau_m}x^T(s)\ \ \sum_{s=k-\sigma}^{k-1}x^T(s)\,\Big]^T.
\end{aligned}
$$

From Assumption 1, for any $s=1,2,\ldots,n$, we have

$$
\big(f_s(x_s(k))-F_s^-x_s(k)\big)\big(f_s(x_s(k))-F_s^+x_s(k)\big)\le0,
$$

which is equivalent to

$$
\alpha^T(k)\begin{bmatrix}F_s^-F_s^+e_se_s^T & -\frac{F_s^-+F_s^+}{2}e_se_s^T\\[2pt] -\frac{F_s^-+F_s^+}{2}e_se_s^T & e_se_s^T\end{bmatrix}\alpha(k)\le0,
$$

where $\alpha^T(k)=[x^T(k)\ \ f^T(x(k))]$ and $e_s$ denotes the unit column vector having the element 1 in its $s$th row and zeros elsewhere. Let $D_1=\mathrm{diag}\{d_{11},d_{12},\ldots,d_{1n}\}$. Then

$$
\sum_{s=1}^{n}d_{1s}\,\alpha^T(k)\begin{bmatrix}F_s^-F_s^+e_se_s^T & -\frac{F_s^-+F_s^+}{2}e_se_s^T\\[2pt] -\frac{F_s^-+F_s^+}{2}e_se_s^T & e_se_s^T\end{bmatrix}\alpha(k)\le0,
\quad\text{i.e.,}\quad
\alpha^T(k)\begin{bmatrix}F_1D_1 & -F_2D_1\\ -F_2D_1 & D_1\end{bmatrix}\alpha(k)\le0.\tag{18}
$$

Similarly, for $\alpha_1^T(k)=[x^T(k-\tau(k))\ \ f^T(x(k-\tau(k)))]$ and $D_2=\mathrm{diag}\{d_{21},d_{22},\ldots,d_{2n}\}$, one gets

$$
\alpha_1^T(k)\begin{bmatrix}F_1D_2 & -F_2D_2\\ -F_2D_2 & D_2\end{bmatrix}\alpha_1(k)\le0.\tag{19}
$$

Combining (4)–(19) and taking the mathematical expectation on both sides, we get

$$
\begin{aligned}
\mathbb{E}[\Delta V(k)]\le{}&\mathbb{E}\Bigg[\zeta^T(k)\Big(\Pi+\tau_MM_1Z_1^{-1}M_1^T+\tau_2M_2(Z_1+Z_2)^{-1}M_2^T+\tau_2M_3Z_2^{-1}M_3^T\Big)\zeta(k)\\
&-\sum_{s=k-\tau(k)}^{k-1}\big(M_1^T\zeta(k)+Z_1\eta(s)\big)^TZ_1^{-1}\big(M_1^T\zeta(k)+Z_1\eta(s)\big)\\
&-\sum_{s=k-\tau_M}^{k-\tau(k)-1}\big(M_2^T\zeta(k)+(Z_1+Z_2)\eta(s)\big)^T(Z_1+Z_2)^{-1}\big(M_2^T\zeta(k)+(Z_1+Z_2)\eta(s)\big)\\
&-\sum_{s=k-\tau(k)}^{k-\tau_m-1}\big(M_3^T\zeta(k)+Z_2\eta(s)\big)^TZ_2^{-1}\big(M_3^T\zeta(k)+Z_2\eta(s)\big)\Bigg].
\end{aligned}\tag{20}
$$

Let $\Xi_i=\Pi+\tau_MM_1Z_1^{-1}M_1^T+\tau_2M_2(Z_1+Z_2)^{-1}M_2^T+\tau_2M_3Z_2^{-1}M_3^T$ and $\lambda_0=\min_{i\in S}\lambda_{\min}(-\Xi_i)$; by the Schur complement, LMI (3) is equivalent to $\Xi_i<0$, so $\lambda_0>0$. Finally, from (20) we obtain, for any $k\ge0$,

$$
\mathbb{E}\big[V(x_{k+1},r_{k+1}=j)\,\big|\,x_k,r_k=i\big]\le V(x_k,r_k=i)-\lambda_0x^T(k)x(k).\tag{21}
$$

Setting $k=0$ and $k=1$ in (21) yields

$$
\mathbb{E}\big[V(x_1,r_1)\,\big|\,x_0,r_0\big]\le V(x_0,r_0)-\lambda_0x^T(0)x(0)\tag{22}
$$

and

$$
\mathbb{E}\big[V(x_2,r_2)\,\big|\,x_1,r_1\big]\le V(x_1,r_1)-\lambda_0x^T(1)x(1).\tag{23}
$$

Taking the expectation $\mathbb{E}[\,\cdot\,|\,x_0,r_0]$ on both sides of (23), with the aid of (22), leads to

$$
\mathbb{E}\big[V(x_2,r_2)\,\big|\,x_0,r_0\big]\le V(x_0,r_0)-\lambda_0\sum_{l=0}^{1}\mathbb{E}\big[x^T(l)x(l)\,\big|\,x_0,r_0\big].
$$

Continuing the iterative procedure (21), one obtains

$$
\mathbb{E}\big[V(x_{T+1},r_{T+1})\,\big|\,x_0,r_0\big]\le V(x_0,r_0)-\lambda_0\sum_{l=0}^{T}\mathbb{E}\big[x^T(l)x(l)\,\big|\,x_0,r_0\big],
$$

which implies that

$$
\sum_{l=0}^{T}\mathbb{E}\big[x^T(l)x(l)\,\big|\,x_0,r_0\big]\le\frac{1}{\lambda_0}V(x_0,r_0)<\infty.
$$

Hence the MJDNN (1) without uncertainties is stochastically stable in the sense of Definition 1, and thus the proof is completed.
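The Schur complement step that links the bound (20) to the LMI (3) can also be checked numerically. The sketch below verifies, on random symmetric data, that the block matrix is negative definite exactly when the completed-square form is; it is an illustration of the equivalence, not part of the paper's proof.

```python
# Numerical check of the Schur-complement equivalence behind (3) and (20):
# with Z > 0,
#   [[X, sqrt(t)M], [sqrt(t)M^T, -Z]] < 0   iff   X + t M Z^{-1} M^T < 0.
# Random-data illustration only.
import numpy as np

rng = np.random.default_rng(3)
n, t = 3, 2.5
for _ in range(200):
    X = rng.standard_normal((n, n)); X = 0.5 * (X + X.T)            # symmetric X
    M = rng.standard_normal((n, n))
    W = rng.standard_normal((n, n)); Z = W @ W.T + 0.1 * np.eye(n)  # Z > 0
    big = np.block([[X, np.sqrt(t) * M],
                    [np.sqrt(t) * M.T, -Z]])
    big_neg = np.linalg.eigvalsh(big).max() < 0
    schur_neg = np.linalg.eigvalsh(X + t * M @ np.linalg.inv(Z) @ M.T).max() < 0
    assert big_neg == schur_neg
print("Schur-complement equivalence holds on all sampled instances")
```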

In the next theorem, we consider the robust stochastic stability of the uncertain MJDNN (1) with norm-bounded parametric uncertainties. In particular, we derive a sufficient condition in terms of LMIs for the robust stochastic stability of the uncertain MJDNN (1) with time-varying delays by extending the result of the previous theorem.

Theorem 2. Under Assumption 1, the MJDNN (1) is robustly stochastically stable if there exist matrices $P_i>0$, $Q_p>0$ $(p=1,2,3,5)$, $Q_4>0$, $R_u>0$ $(u=1,2,3,4)$, $Z_v>0$ $(v=1,2,3)$, diagonal matrices $D_q>0$ $(q=1,2)$, matrices $M_l$ $(l=1,2,3,4,5)$ with appropriate dimensions, and positive scalars $\varepsilon_i$ such that the following LMIs hold for all $i\in S$:

$$
\begin{bmatrix}
\Pi & \Theta_1^T & \varepsilon_i\Theta_2^T\\
* & -\varepsilon_iI & 0\\
* & * & -\varepsilon_iI
\end{bmatrix}<0,\tag{24}
$$

where $\Theta_1=[\,E_i^TM_{51}^T\ \ E_i^TM_{52}^T\ \ 0\ \ E_i^TM_{53}^T\ \ 0\ \ E_i^TM_{54}^T\ \ 0_{n,10n}\,]$, $\Theta_2=[\,0\ \ N_{1i}\ \ 0_{n,8n}\ \ N_{2i}\ \ N_{3i}\ \ 0_{n,4n}\,]$, and the other parameters are defined as in Theorem 1.

Proof. Replacing the matrices $A_i$, $B_i$, and $C_i$ in Theorem 1 by $A_i+E_iF_i(k)N_{1i}$, $B_i+E_iF_i(k)N_{2i}$, and $C_i+E_iF_i(k)N_{3i}$, respectively, and then applying the Schur complement together with Lemma 2, we obtain (24).
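For reference, the Schur complement step invoked here is the standard equivalence (textbook material, stated for symmetric blocks; not specific to this paper):

$$
\begin{bmatrix}S_{11}&S_{12}\\ S_{12}^T&S_{22}\end{bmatrix}<0
\quad\Longleftrightarrow\quad
S_{22}<0\ \ \text{and}\ \ S_{11}-S_{12}S_{22}^{-1}S_{12}^T<0.
$$

In (24) the role of $S_{22}$ is played by $\mathrm{diag}\{-\varepsilon_iI,-\varepsilon_iI\}$, so the inequality absorbs the uncertainty terms bounded through Lemma 2 into one augmented LMI.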

Next, we give a corollary to show that our main results cover some cases that have been studied in the literature. In the absence of the leakage delay, the uncertain discrete-time Markovian jumping neural network (1) becomes the standard uncertain MJDNN

$$
x(k+1)=(A_i+\Delta A_i)x(k)+(B_i+\Delta B_i)f(x(k))+(C_i+\Delta C_i)f(x(k-\tau(k))).\tag{25}
$$

In the following corollary, we deduce robust stability results for the model (25) from Theorems 1 and 2.

Corollary 1. Under Assumption 1, the MJDNN (25) is robustly stochastically stable if there exist matrices $P_i>0$, $Q_p>0$ $(p=1,2,3)$, $Q_4>0$, $R_u>0$ $(u=1,2,3)$, $Z_v>0$ $(v=1,2)$, diagonal matrices $D_q>0$ $(q=1,2)$, matrices $M_l$ $(l=1,2,3,4,5)$ with appropriate dimensions, and positive scalars $\varepsilon_i$ such that the following LMIs hold for all $i,j\in S$, $i\ne j$:

$$
\begin{bmatrix}
\Theta & \sqrt{\tau_M}\,M_1 & \sqrt{\tau_2}\,M_2 & \sqrt{\tau_2}\,M_3 & \Theta_1^T & \varepsilon_i\Theta_2^T\\
* & -Z_1 & 0 & 0 & 0 & 0\\
* & * & -(Z_1+Z_2) & 0 & 0 & 0\\
* & * & * & -Z_2 & 0 & 0\\
* & * & * & * & -\varepsilon_iI & 0\\
* & * & * & * & * & -\varepsilon_iI
\end{bmatrix}<0,\tag{26}
$$

where $\Theta=[\Theta_{p,q}]_{13\times13}$, $\Theta_1=[\,E_i^TM_{51}^T\ \ 0\ \ E_i^TM_{52}^T\ \ 0\ \ E_i^TM_{53}^T\ \ 0_{n,8n}\,]$, $\Theta_2=[\,N_{1i}\ \ 0_{n,7n}\ \ N_{2i}\ \ N_{3i}\ \ 0_{n,3n}\,]$,

$$
P_i=\begin{bmatrix}P_{11i}&P_{12i}&P_{13i}\\ *&P_{22i}&P_{23i}\\ *&*&P_{33i}\end{bmatrix},\qquad
Q_p=\begin{bmatrix}Q_{p1}&Q_{p2}\\ *&Q_{p3}\end{bmatrix},\ p=1,2,3,\qquad
\bar\pi=\sum_{j=1}^{N}\pi_{ij},
$$

$$
\begin{aligned}
\Theta_{1,1}&=\bar\pi P_{11j}-P_{11i}+Q_{11}+(\tau_2+1)Q_{21}+Q_{31}+\tau_m^2R_1+\tau_MR_2+\tau_2R_3-F_1D_1\\
&\quad+2M_{51}(A_i-I)+2M_{11},\\
\Theta_{1,2}&=\bar\pi P_{12j}-P_{12i}+M_{31}-M_{41},\\
\Theta_{1,3}&=M_{41}-M_{11}+M_{12}^T+M_{21}-M_{31}+M_{52}^T(A_i-I)^T,\\
\Theta_{1,4}&=\bar\pi P_{13j}-P_{13i}-M_{21},\\
\Theta_{1,5}&=\bar\pi P_{11j}+Q_{12}+(\tau_2+1)Q_{22}+Q_{32}-M_{51}+M_{52}^T(A_i-I)^T+M_{13}^T,\\
\Theta_{1,6}&=\bar\pi P_{12j},\quad \Theta_{1,8}=\bar\pi P_{13j},\quad \Theta_{1,9}=F_2D_1+M_{51}B_i,\quad \Theta_{1,10}=M_{51}C_i,\\
\Theta_{1,11}&=M_{41},\quad \Theta_{1,12}=-M_{41},\quad \Theta_{1,13}=M_{41},\quad
\Theta_{2,2}=\bar\pi P_{22j}-P_{22i}-Q_{11},
\end{aligned}
$$
