deposit_hagen - Publikationsserver der Universitätsbibliothek
Mathematik und Informatik
Informatik-Berichte 53 – 02/1985
The time-specific failure frequency of repairable systems with stochastically independent components

W. Schneeweiss, Hagen

Key words: Reliability, repair, failure frequency

Abstract: It is shown how the time-specific failure frequency of a repairable system can be calculated from the reliability structure function (or fault tree) of the system and the renewal densities of the alternating renewal processes of the components. (It is discussed why results are restricted to the case of stochastically independent components.)
Contents:
1 Introduction
2 The role of the autocorrelation function of the system state indicator process
3 A brute force approach to the determination of the time-specific failure frequency of a repairable system with independent components
4 A more elegant way to determine the time-specific failure frequency
5 Modifying the unate multilinear polynomial to get the system failure frequency
6 Difficulties with the extension of results for the stationary case if the fault tree function is not unate
7 Concluding remarks
8 Appendix: Renewal function and availability of a repairable component
9 References
1 INTRODUCTION
For many good reasons the determination of the mean point density of a random point process [1] is highly desirable. In the theory of renewal processes [2] this quantity is the well-known renewal density1). In reliability theory it has been called by Singh [3] "time-specific failure frequency (of a repairable system)". For the point process of failures and (finished) repairs of component i of an n-component system S we denote the time-specific failure frequency by the symbol ν_i(t); for the system, by ν_S(t). With

X_S = φ(X_1, ..., X_n),

a Boolean redundancy-structure function, typically the fault tree function, where X_i is the indicator variable of component i, we are looking for a function expressing ν_S(t) in terms of the ν_i(t) and the probabilistic description of the components.

If each of the n components of the system changes its two states2) 3) ("good" and "bad") according to an alternating renewal process1), then ν_i(t) is the renewal density of the renewal process defined by consecutive pairs of life-time (L) and down-time (D) intervals. Consequently, in that case ν_i(t) is determined by the probability density functions (pdf's) of L and D1).

1) See the appendix for a short discussion.
2) Synonyms are: alive, up, working.
3) Synonyms are: dead, down, failed.
2 THE ROLE OF THE AUTOCORRELATION FUNCTION OF THE SYSTEM STATE INDICATOR PROCESS
We start with a plausible lemma.

Lemma 2-1: The mean point density of an orderly [4] alternating point process i (PP_i), with the indicator process given by

X_i(t) = { 0, for state "0"
         { 1, for state "1"

of point process i, is the temporal joint probability density

ν_i(t) = lim_{Δt→0} (1/Δt) P{[X_i(t) = 0] ∩ [X_i(t+Δt) = 1]}.   (2-1)

(See also [8].)

Proof: With

N_i(t_1, t_2) := {number of changes of X_i(·) from 0 to 1 in the time interval from t_1 to t_2},   (2-2)

we have, with P{·} for probability,

P{[X_i(t) = 0] ∩ [X_i(t+Δt) = 1]} = P{N_i(t, t+Δt) is odd}.   (2-3)

Now, for any orderly PP_i [4], with E{·} for expected value,

P{N_i(t, t+Δt) is odd} = P{N_i(t, t+Δt) = 1} + o(Δt)
                       = Σ_{j=1}^∞ j P{N_i(t, t+Δt) = j} + o(Δt)
                       =: E{N_i(t, t+Δt)} + o(Δt).   (2-4)

Clearly, as a mean point density,

lim_{Δt→0} (1/Δt) E{N_i(t, t+Δt)} = ν_i(t).   (2-5)

Hence, dividing (2-4) and (2-3) by Δt and letting Δt→0 one has (2-1), q.e.d.
Now it is easy to prove the following theorem.

Theorem 2-1: Using the notation of lemma 2-1,

ν_i(t) = lim_{Δt→0} (1/Δt) E[X̄_i(t) X_i(t+Δt)] =: Ċ_i(t),   (2-6)

where Ċ_i(t) is, by usual definition [5], the time derivative of the autocorrelation function of the (stochastic) indicator process {X_i(t); t ≥ 0} for zero shift.

Proof: Trivially, with

X̄ := 1 − X for X ∈ {0,1},   (2-7)

the (random) event

{[X_i(t) = 0] ∩ [X_i(t+Δt) = 1]} = {X̄_i(t) X_i(t+Δt) = 1}.   (2-8)

Further, since

E(X) := 0 · P{X=0} + 1 · P{X=1} = P{X=1}   (2-9)

for X ∈ {0,1}, one has

P{X̄_i(t) X_i(t+Δt) = 1} = E[X̄_i(t) X_i(t+Δt)].   (2-10)

In conjunction with (2-1) this yields (2-6), q.e.d.
Comments:

1) The autocorrelation function of the binary process {X(t); t ≥ 0} is defined as [5]

C(t_1, t_2) := E[X(t_1) X(t_2)],   (2-11)

so that

E[X̄(t) X(t+Δt)] = C̄(t, t+Δt).   (2-11a)

Hence the above notation

Ċ(t) := lim_{Δt→0} (1/Δt) C̄(t, t+Δt)   (2-12)

appears to be plausible.

2) Sometimes, instead of X̄(t)X(t+Δt), initially X(t)X(t+Δt) is given. By (see (2-7))

X̄(t)X(t+Δt) = [1 − X(t)] X(t+Δt) = X(t+Δt) − X(t)X(t+Δt)   (2-13)

the desired product X̄(t)X(t+Δt) is easily gained.

3) In fault tree analysis one usually defines (for the integers 0 and 1)

X(t) = { 0, if the unit under consideration is good
       { 1, if it is bad,   (2-14)

with indices i and S for component i and system S, respectively.
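Theorem 2-1 can be illustrated numerically. The following sketch (not part of the original report; all parameters are illustrative) models a single repairable component as a two-state discrete-time Markov chain with failure rate λ and repair rate μ. In this model E[X̄(n)X(n+1)] = A(n)·λ·dt exactly, so the quotient by dt realizes (2-6), and for large t it must approach the known stationary failure frequency λμ/(λ+μ) of the exponential-life/exponential-repair case.

```python
# Numerical sketch of theorem 2-1 (not from the report; parameters are
# illustrative): a repairable component as a two-state discrete-time Markov
# chain, X = 0 "good", X = 1 "bad" as in (2-14).

lam, mu, dt = 0.5, 2.0, 0.001      # failure rate, repair rate, time step

p_fail, p_rep = lam * dt, mu * dt  # one-step transition probabilities
U = 0.0                            # U(0) = 0: the component starts "good"
for _ in range(200_000):           # iterate deep into the steady state
    U = U * (1.0 - p_rep) + (1.0 - U) * p_fail

A = 1.0 - U
nu = A * lam                       # (2-6): lim E[Xbar(t)X(t+dt)]/dt
nu_stationary = lam * mu / (lam + mu)
print(nu, nu_stationary)           # both close to 0.4
```

Note that the recursion only propagates the exact marginal distribution of the chain; no sampling is involved, so the agreement is to machine precision once the transient has died out.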
3 A BRUTE FORCE APPROACH TO THE DETERMINATION OF THE TIME-SPECIFIC FAILURE FREQUENCY OF A REPAIRABLE SYSTEM WITH INDEPENDENT
COMPONENTS
Now let us apply the results of §2 to system failure frequency analysis. In essence, we are going to determine the time-specific failure frequency as (see (2-6))

ν_S(t) = lim_{Δt→0} (1/Δt) E[X̄_S(t) X_S(t+Δt)].   (3-1)

In the subsequent analysis it will be slightly more practical to write

ν(t)Δt + o(Δt) = E[X̄(t) X(t+Δt)],   (3-2)

with index S for system and i for component i. If both X_S = φ(X) and X̄_S = φ̄(X) are multilinear polynomials of the literals X_1, X̄_1, X_2, X̄_2, ..., X_n, X̄_n, the notation of (3-2) will show very efficiently which products of terms of φ viz. φ̄ will vanish with Δt→0 and which will not. This is explained better with an example. For better reading, we use henceforth the notation (see (2-14))

A(t) = P{X̄(t) = 1} = E[X̄(t)]   (availability),   (3-3)
U(t) = P{X(t) = 1} = E[X(t)] = 1 − A(t)   (unavailability).   (3-4)
EXAMPLE 3-1: 1-OUT-OF-2 SYSTEM

In the simplest case of redundancy, the fault tree function is

X_S = X_1 X_2,   (3-5)

and (as is easily checked)

X̄_S = X̄_1 + X̄_2 − X̄_1 X̄_2.   (3-6)

Hence,

X̄_S(t) X_S(t+Δt) = [X̄_1(t) X_1(t+Δt)] X_2(t+Δt) + [X̄_2(t) X_2(t+Δt)] X_1(t+Δt)
                  − [X̄_1(t) X_1(t+Δt)] [X̄_2(t) X_2(t+Δt)],   (3-7)

and by (3-2) and (3-3,4), if

E[X̄_i(t) X_i(t+Δt) X_j(t+Δt)] = E[X̄_i(t) X_i(t+Δt)] E[X_j(t+Δt)],

which is typically true if {X_1(t)} and {X_2(t)} are stochastically independent binary random processes,

ν_S(t)Δt + o(Δt) = ν_1(t)Δt U_2(t+Δt) + ν_2(t)Δt U_1(t+Δt) − ν_1(t)Δt ν_2(t)Δt,

so that, after dividing by Δt and then letting Δt→0,

ν_S(t) = ν_1(t) U_2(t) + ν_2(t) U_1(t).   (3-8)

This result is plausible, since, for U_1(t), U_2(t) << 1, the system failure frequency is much smaller than the failure frequency of any of its components.
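Relation (3-8) can be checked numerically. The sketch below (not from the report; the rates are illustrative) models two independent repairable components as two-state discrete-time Markov chains and compares the exact one-step probability of a system failure (1-out-of-2: the system is down iff both components are down) with [ν_1 U_2 + ν_2 U_1]·dt, which it should match up to O(dt²).

```python
# Numeric check of (3-8); a sketch under illustrative assumptions.
dt = 1e-4
lam = (0.5, 1.0)             # failure rates lam_i
mu = (1.0, 2.0)              # repair rates mu_i

U = [0.0, 0.0]               # exact unavailabilities U_i(n)
for _ in range(300_000):     # drive both chains into the steady state
    for i in range(2):
        U[i] = U[i] * (1.0 - mu[i] * dt) + (1.0 - U[i]) * lam[i] * dt

U_next = [U[i] * (1.0 - mu[i] * dt) + (1.0 - U[i]) * lam[i] * dt
          for i in range(2)]
# exact P{system up at step n, down at step n+1}
#   = P{both down at n+1} - P{both down at n and at n+1}:
q = U_next[0] * U_next[1] \
    - U[0] * (1.0 - mu[0] * dt) * U[1] * (1.0 - mu[1] * dt)

nu = [(1.0 - U[i]) * lam[i] for i in range(2)]   # nu_i = A_i * lam_i
rhs = (nu[0] * U[1] + nu[1] * U[0]) * dt         # (3-8) times dt
print(q / dt, rhs / dt)                          # both close to 1/3
```

The residual difference is of order ν_1ν_2·dt, i.e. exactly the simultaneous-failure term that (3-2) absorbs into o(Δt).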
Now we look at a further, less trivial, example.
EXAMPLE 3-2: 2-OUT-OF-3 SYSTEM

As is well known [6], for a 2-out-of-3 system

X_S = X_1X_2 + X_1X_3 + X_2X_3 − 2 X_1X_2X_3   (3-9)

and

X̄_S = X̄_1X̄_2 + X̄_1X̄_3 + X̄_2X̄_3 − 2 X̄_1X̄_2X̄_3.   (3-10)

Here the product of the first terms yields by (3-2) (as part of E[X̄_S(t) X_S(t+Δt)])

E[X̄_1(t) X_1(t+Δt)] E[X̄_2(t) X_2(t+Δt)] = [ν_1(t)Δt + o(Δt)] [ν_2(t)Δt + o(Δt)],   (3-11)

which, after the division by Δt as prescribed in (3-1), vanishes with Δt. Consequently, using (3-2) and gathering 6 individual o(Δt) into the one appearing at the l.h.s. of the following equation, for stochastically independent X_i(t),

ν_S(t)Δt + o(Δt) = ν_1(t)Δt E[X̄_2(t) X_3(t+Δt)] + ν_2(t)Δt E[X̄_1(t) X_3(t+Δt)]
                 + ν_1(t)Δt E[X̄_3(t) X_2(t+Δt)] + ν_3(t)Δt E[X̄_1(t) X_2(t+Δt)]
                 + ν_2(t)Δt E[X̄_3(t) X_1(t+Δt)] + ν_3(t)Δt E[X̄_2(t) X_1(t+Δt)].   (3-12)

Division by Δt gives, for Δt→0 and stochastically independent X_i viz. X̄_i (using the notation of (3-3,4)),

ν_S(t) = ν_1(t)[A_2(t)U_3(t) + A_3(t)U_2(t)] + ν_2(t)[A_1(t)U_3(t) + A_3(t)U_1(t)]
       + ν_3(t)[A_1(t)U_2(t) + A_2(t)U_1(t)].   (3-13)
Now let us discuss briefly how a polynomial form of φ̄ can be found. If φ is a (multilinear) polynomial

X_S = c_0 + Σ_i c_i Π_{k=1}^{n_i} X_{l(i,k)};   l(i,k) ∈ {1, ..., n},  l(i,1) ≠ l(i,2) ≠ ...,   (3-14)

c_0, c_1, ... integers, then, trivially, the desired polynomial is

X̄_S = 1 − X_S.   (3-15)

E.g. in the case of the 2-out-of-3 system, instead of (3-10),

X̄_S = 1 − X_1X_2 − X_1X_3 − X_2X_3 + 2 X_1X_2X_3.   (3-16)

In E[X̄_S(t) X_S(t+Δt)] this would initially lead to terms like

E[X_i(t) X_i(t+Δt) X_j(t) X_k(t+Δt)].

Here it would be necessary to replace X_i(t) X_i(t+Δt) according to (2-7):

X_i(t) X_i(t+Δt) = [1 − X̄_i(t)] X_i(t+Δt) = X_i(t+Δt) − X̄_i(t) X_i(t+Δt).   (3-17)
To show that this little trick works, let us do a simple example.

EXAMPLE 3-3: 1-OUT-OF-2 SYSTEM (EX. 3-1 CONTD.)

In the case of a 1-out-of-2 system (1 stand-by component), from (3-5), in contrast to (3-6),

X̄_S = 1 − X_1X_2.   (3-18)

Now, by (3-17),

X̄_S(t) X_S(t+Δt) = X_1(t+Δt) X_2(t+Δt)
                  − [X_1(t+Δt) − X̄_1(t) X_1(t+Δt)] [X_2(t+Δt) − X̄_2(t) X_2(t+Δt)],   (3-19)

which equals (3-7), such that again the ν_S(t) of (3-8) will result.
For completeness it should be mentioned that, as an alternative to (3-15), there exists a well-known Boolean method to determine φ̄ from φ, where φ is of disjunctive normal form (DNF). Typically

φ(X) = ⋁_{i=1}^{m} ⋀_{k=1}^{n_i} X_{l(i,k)}.   (3-20)

Then by de Morgan's law

φ̄(X) = ⋀_{i=1}^{m} ⋁_{k=1}^{n_i} X̄_{l(i,k)}.   (3-21)

However, for further processing the r.h.s. must be transformed to a DNF and this to a multilinear polynomial. Hence (3-15), backed up by (3-17), looks much more practical.
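The complementation rule (3-15) is easy to mechanize. The following sketch (not part of the report) stores a unate multilinear polynomial as a map from variable-index sets to integer coefficients, forms X̄_S = 1 − X_S, and checks the result (3-16) for the 2-out-of-3 system on all 8 assignments against the Boolean definition of the fault tree function.

```python
# Sketch of (3-15): complementing a unate multilinear polynomial.
from itertools import product

def evaluate(poly, x):
    """Evaluate a multilinear polynomial at the 0/1 assignment x (a dict)."""
    total = 0
    for vars_, c in poly.items():
        term = c
        for v in vars_:
            term *= x[v]
        total += term
    return total

# (3-9): X_S = X1X2 + X1X3 + X2X3 - 2 X1X2X3
x_s = {frozenset({1, 2}): 1, frozenset({1, 3}): 1,
       frozenset({2, 3}): 1, frozenset({1, 2, 3}): -2}

# (3-15): Xbar_S = 1 - X_S (negate all coefficients, add the constant 1)
xbar_s = {vars_: -c for vars_, c in x_s.items()}
xbar_s[frozenset()] = xbar_s.get(frozenset(), 0) + 1

for bits in product((0, 1), repeat=3):
    x = dict(zip((1, 2, 3), bits))
    system_down = sum(bits) >= 2      # 2-out-of-3 fault tree: majority bad
    assert evaluate(x_s, x) == int(system_down)
    assert evaluate(xbar_s, x) == 1 - int(system_down)
print("ok")
```

Since (3-15) only negates coefficients and shifts the constant term, no transformation back to a DNF is needed, in line with the remark above.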
4 A MORE ELEGANT WAY TO DETERMINE THE TIME-SPECIFIC FAILURE FREQUENCY
As is obvious from the examples of §3, the above "brute force" approach generally leads to a formidable amount of algebraic manipulations, even for small-size problems. The following theorem, in principle known from [9], shows a way out.
Theorem 4-1: The time-specific failure frequency of the system S consisting of n stochastically independent components is

ν_S(t) = Σ_{i=1}^{n} ν_i(t) P_i(t),   (4-1)

where

P_i(t) := P{φ'_i(t) = 1},   (4-2)

with

φ'_i := ∂φ/∂X_i,   (4-3)

the formal partial derivative1) of the Boolean fault tree function, i.e. φ'_i is defined by the fault tree function φ, given as

φ = X_i φ'_i + φ''_i,   (4-4)

where φ'_i and φ''_i don't depend on X_i.2)

Proof: By the law of total probability, with N(t_1, t_2) from (2-2),

P{N_S(t, t+Δt) = 1} = Σ_{i=1}^{n} P{N_S(t, t+Δt) = 1 | N_i(t, t+Δt) = 1} · P{N_i(t, t+Δt) = 1},   (4-5)

where it is assumed that at most one component can fail in (t, t+Δt). From (2-3,4) we have (for indices S and i)

P{N(t, t+Δt) = 1} = ν(t)Δt + o(Δt).   (4-6)

Further, the conditional probability in (4-5) is obviously the probability that, given a change of state of component i, the system will behave in the same way (between t and t+Δt). As can be concluded from (4-4), this is equivalent to having φ'_i = 1 (between t and t+Δt); in short,

lim_{Δt→0} P{N_S(t, t+Δt) = 1 | N_i(t, t+Δt) = 1} = P{φ'_i(t) = 1} =: P_i(t).   (4-7)

Hence, inserting (4-6) and (4-7) in (4-5), dividing by Δt and letting Δt→0 yields (4-1), q.e.d.

1) Clearly, Boolean functions are not differentiable in the usual sense.
2) A basically similar approach can be found in §3.4 of [10].
EXAMPLE 4-1 (EX. 3-2 CONTD.)

For φ as given by (3-9),

φ'_1 = X_2 + X_3 − 2 X_2X_3 = X̄_2X_3 + X_2X̄_3,   (4-8)
φ'_2 = X_1 + X_3 − 2 X_1X_3 = X̄_1X_3 + X_1X̄_3,   (4-9)
φ'_3 = X_1 + X_2 − 2 X_1X_2 = X̄_1X_2 + X_1X̄_2.   (4-10)

Since φ'_1, φ'_2 and φ'_3 are Boolean, by (2-9)

P_i := P{φ'_i = 1} = E(φ'_i);   (4-11)

specifically, using (3-3,4),

P_1(t) = A_2(t)U_3(t) + U_2(t)A_3(t),   (4-12)
P_2(t) = A_1(t)U_3(t) + U_1(t)A_3(t),   (4-13)
P_3(t) = A_1(t)U_2(t) + U_1(t)A_2(t),   (4-14)

as in (3-13).
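Theorem 4-1 translates directly into a small program. In the sketch below (not part of the report; the U_i and ν_i values are illustrative) the formal partial derivative φ'_i of (4-3,4) is obtained as φ(...,X_i=1,...) − φ(...,X_i=0,...), P_i(t) = E[φ'_i] is evaluated for independent components, and ν_S from (4-1) is compared with the closed form (3-13).

```python
# Sketch of theorem 4-1 for the 2-out-of-3 system.
from itertools import product

def phi(x):                      # 2-out-of-3 fault tree function, cf. (3-9)
    return int(sum(x) >= 2)

def P_i(i, U):
    """E[phi_i'] for independent components with unavailabilities U."""
    n = len(U)
    others = [j for j in range(n) if j != i]
    total = 0.0
    for bits in product((0, 1), repeat=n - 1):
        x = [0] * n
        p = 1.0
        for j, b in zip(others, bits):
            x[j] = b
            p *= U[j] if b else 1.0 - U[j]
        x[i] = 1
        hi = phi(x)
        x[i] = 0
        lo = phi(x)
        total += p * (hi - lo)   # phi_i' evaluated at this assignment
    return total

U = [0.1, 0.2, 0.3]              # illustrative U_i(t)
nu = [0.01, 0.02, 0.03]          # illustrative nu_i(t)
A = [1.0 - u for u in U]

nu_s = sum(nu[i] * P_i(i, U) for i in range(3))    # (4-1)
closed = (nu[0] * (A[1] * U[2] + A[2] * U[1])      # (3-13)
          + nu[1] * (A[0] * U[2] + A[2] * U[0])
          + nu[2] * (A[0] * U[1] + A[1] * U[0]))
print(nu_s, closed)
```

For a unate φ, hi − lo is 0 or 1, so P_i is indeed a probability, as (4-2) requires.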
5 MODIFYING THE UNATE MULTILINEAR POLYNOMIAL TO GET THE SYSTEM FAILURE FREQUENCY
As is shown e.g. in [6], any Boolean function of indicator variables can be transformed to a multilinear form without negated (complemented) variables (hence called "unate"), as given by (3-14) and exemplified by (3-9). Now, what would happen to (3-14) in view of theorem 4-1? Obviously each term

c X_i X_j X_k ...

would result in

c U_i(t) U_j(t) U_k(t) ... [ν_i(t)/U_i(t) + ν_j(t)/U_j(t) + ν_k(t)/U_k(t) + ...]   (5-1)

as part of the final ν_S. Let's sum this up as follows:
Theorem 5-1: For a system of n stochastically state-independent components with the fault tree function given as the multilinear polynomial

X_S = c_0 + Σ_{i=1}^{m} c_i Π_{k=1}^{n_i} X_{l(i,k)};   l(i,k) ∈ {1, ..., n},   (5-2)

the time-specific failure frequency, i.e. the mean (temporal) density of failures, is

ν_S(t) = Σ_{i=1}^{m} [ c_i ( Π_{k=1}^{n_i} U_{l(i,k)}(t) ) Σ_{k=1}^{n_i} ( ν_{l(i,k)}(t) / U_{l(i,k)}(t) ) ].   (5-3)
EXAMPLE 5-1: 2-OUT-OF-3 SYSTEM

From X_S of (3-9) we get immediately (by (5-3))

ν_S(t) = U_1(t)U_2(t) [ν_1(t)/U_1(t) + ν_2(t)/U_2(t)]
       + U_1(t)U_3(t) [ν_1(t)/U_1(t) + ν_3(t)/U_3(t)]
       + U_2(t)U_3(t) [ν_2(t)/U_2(t) + ν_3(t)/U_3(t)]
       − 2 U_1(t)U_2(t)U_3(t) [ν_1(t)/U_1(t) + ν_2(t)/U_2(t) + ν_3(t)/U_3(t)].   (5-4)

This checks with (3-13) if A_i = 1 − U_i is taken into account.
6 DIFFICULTIES WITH THE EXTENSION OF RESULTS FOR THE STATIONARY CASE IF THE FAULT TREE FUNCTION IS NOT UNATE
In §§4 and 5 it has been shown that well-known results [9] for the stationary case are easily extensible to the time-variable (time-specific) case. The only exception was that in (5-3) ν_j(t)/U_j(t) could not be replaced by μ_j(t) unless one defines

μ_j(t) := ν_j(t)/U_j(t).   (6-1)
In [7] I could find a much more elegant result because, in the stationary case,

ν = Uμ = Aλ,   (6-2)

where

μ = 1/E(D)   (repair rate),
λ = 1/E(L)   (failure rate).
In short, it was shown in [7] that, for independent components, from

X_S = c_0 + Σ_{i=1}^{m} c_i Π_{k=1}^{n_i} X̃_{l(i,k)};   X̃_j ∈ {X_j, X̄_j},   (6-3)

there follows

ν_S = Σ_{i=1}^{m} [ c_i ( Π_{k=1}^{n_i} Ũ_{l(i,k)} ) Σ_{k=1}^{n_i} μ̃_{l(i,k)} ],   (6-4a)

with

Ũ_j = U_j and μ̃_j = μ_j, if X̃_j = X_j;
Ũ_j = A_j and μ̃_j = λ_j, if X̃_j = X̄_j.   (6-4b)
Such a result is, in general, not possible here because, usually, (6-2) does not hold true1). This can be verified as follows. Usually the point processes of the components will be alternating renewal processes2). For these, as shown in the appendix by (A-12), the Laplace transform of the availability is (for component i)

A*_i(s) = [1 − f*_{L_i}(s)] / { s [1 − f*_{L_i}(s) f*_{D_i}(s)] }.   (6-5)
By U = 1 − A this yields

U*_i(s) = 1/s − A*_i(s) = f*_{L_i}(s) [1 − f*_{D_i}(s)] / { s [1 − f*_{L_i}(s) f*_{D_i}(s)] }.   (6-6)
By general definition of the pdf of a (non-negative) random variable (see (A-16) in the appendix),

f*_{L_i}(s) = 1 − s E(L_i) + o(s)   (6-7)

and

f*_{D_i}(s) = 1 − s E(D_i) + o(s).   (6-8)

Hence, from (A-18),

lim_{t→∞} A_i(t)/E(L_i) = A_i(∞) λ_i = 1/E(L_i + D_i)   (6-9)

and

lim_{t→∞} U_i(t)/E(D_i) = U_i(∞) μ_i = 1/E(L_i + D_i).   (6-10)

1) This was already pointed out in [3].
2) Not necessarily of the Poisson type, as is assumed in [3].
However, for finite t, in general,

A_i(t) λ_i ≠ U_i(t) μ_i.   (6-11)
Furthermore, for the alternating renewal process of component i (see (A-10)),

ν*_i(s) = f*_{L_i}(s) / [1 − f*_{L_i}(s) f*_{D_i}(s)],   (6-12)

so that, generally, at least one of the following inequalities holds:

ν_i(t) ≠ A_i(t) λ_i,   (6-13a)
ν_i(t) ≠ U_i(t) μ_i.   (6-13b)
EXAMPLE 6-1: A COMPONENT WITH EXPONENTIALLY DISTRIBUTED LIFE AND RECTANGULARLY DISTRIBUTED DOWNTIME

As an example for (6-11) and (6-13) let

f_L(t) = λ exp(−λt),   (6-14)

so that

f*_L(s) = λ/(s+λ),   (6-15)

and let

f_D(t) = { c, t ∈ [0, 1/c)
         { 0, else,   (6-16)

so that

f*_D(s) = c ∫_0^{1/c} exp(−st) dt = (c/s) [1 − exp(−s/c)].   (6-17)
Then, inserting in (6-5), (6-6) and (6-12) yields, with the abbreviation

d*(s) := (s+λ) s { 1 − [λ/(s+λ)] (c/s) [1 − exp(−s/c)] }
       = s(s+λ) − λc [ s/c − s²/(2c²) + ... ]
       = s² [ 1 + λ/(2c) + O(s) ],   (6-18)

A*(s) = s/d*(s),   (6-19)

U*(s) = λ { 1 − (c/s) [1 − exp(−s/c)] } / d*(s) = [ λs/(2c) + o(s) ] / d*(s)   (6-20)

and

ν*(s) = λs/d*(s).   (6-21)
To complete the check of (6-11) and (6-13) we need (from (6-16))

1/μ = E(D) = 1/(2c).   (6-22)

Hence

U*(s) μ = 2c λ { 1 − (c/s) [1 − exp(−s/c)] } / d*(s)
        = 2c λ [ s/(2c) − s²/(6c²) + ... ] / d*(s)
        = [ λs + o(s) ] / d*(s);   o(s) ≠ 0.   (6-23)

Now, from (6-19) and the last line of (6-23), it is obvious that here (6-11) is true. From (6-19) and (6-21) it can be seen that (6-13a) does not hold in this case. From (6-21) and the last line of (6-23) it is obvious that here (6-13b) is true.
Clearly, from (A-13) - as a little check -

A(∞) = lim_{s→0} 1 / [1 + λ/(2c) + O(s)] = μ/(λ+μ),   (6-24)

U(∞) = lim_{s→0} [λ/(2c) + O(s)] / [1 + λ/(2c) + O(s)] = λ/(λ+μ)   (6-25)

and

ν(∞) = lim_{s→0} λ / [1 + λ/(2c) + O(s)] = λμ/(λ+μ).   (6-26)
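The limits (6-24..26) can be checked numerically by applying the limit theorem (A-13) at a small value of s. The sketch below (not part of the report) evaluates s·A*(s), s·U*(s) and s·ν*(s) from (6-5), (6-6) and (6-12) for exponential life (6-14) and rectangular downtime (6-16), with the illustrative choice λ = c = 1 (so μ = 2c = 2).

```python
# Numeric check of (6-24..26) via (A-13), evaluated at a small s.
import math

lam, c = 1.0, 1.0
mu = 2.0 * c                                # (6-22): mu = 1/E(D) = 2c
s = 1e-3                                    # small s approximates s -> 0

fL = lam / (s + lam)                        # (6-15)
fD = (c / s) * (1.0 - math.exp(-s / c))     # (6-17)

A_lim = (1.0 - fL) / (1.0 - fL * fD)        # s * A*(s), cf. (6-5)
U_lim = 1.0 - A_lim                         # s * U*(s), cf. (6-6)
nu_lim = s * fL / (1.0 - fL * fD)           # s * nu*(s), cf. (6-12)

print(A_lim, U_lim, nu_lim)
# (6-24..26): mu/(lam+mu) = 2/3, lam/(lam+mu) = 1/3, lam*mu/(lam+mu) = 2/3
```

The deviation from the exact limits is of order s, i.e. about 10^-3 here.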
7 CONCLUDING REMARKS
The main purpose of this investigation was to demonstrate where the limits of the extensibility of results for the stationary case to the general case lie. It could be shown in §§4 and 5 that the results in [9] can be used if (6-1) or the corresponding definition for λ_j(t), namely

λ_j(t) := ν_j(t)/A_j(t),   (7-1)

is used. In §6 it was shown that, in general, the elegant formulas (6-3, 4a, 4b) of [7] are not applicable in the non-stationary case.

Much more important are the results of §3 because they show which type of information would be needed to handle the general case of non-independent components. In fact, higher moments of the X_i(t) and the X_j(t+Δt) must be known. (The intermediate step indicated by (3-12) of a partial factorization of such higher moments doesn't appear to be very realistic.)
8 APPENDIX: RENEWAL FUNCTION AND AVAILABILITY OF A REPAIRABLE COMPONENT

An (ordinary) renewal process (RP) [2] is a stochastic point process where the lengths of all the intervals between neighbouring points are random variables which
- are all stochastically independent (of each other),
- have the same probability distribution with the pdf f_1.

The renewal function H(t) is by definition simply the expected number of points (renewals) between 0 and t:

H(t) := E[N(0,t)].   (A-1)

Next we derive H(t) in terms of the pdf f_1 of point distance. To this end we need the following lemma.
Lemma A-1: For any orderly [4] point process

P{N(0,t) = k} = F_k(t) − F_{k+1}(t);   k ≥ 0,  F_0(t) = 1,   (A-2)

where F_j is the (cumulative) probability distribution function (cdf) of the distance of the j-th random point T_j from the time origin, where - in the case of an ordinary renewal process - the 0-th renewal point is located.

Proof: By definition of the cdf,

F_k(t) := P{T_k ≤ t} = P{N(0,t) ≥ k}
        = P{N(0,t) = k} + P{N(0,t) ≥ k+1}1)
        = P{N(0,t) = k} + F_{k+1}(t),

q.e.d. (The case k = 0 is obvious from the definition of F_1(t).)

Note: Renewal processes are orderly [4].

1) Notice that the random events {N(0,t) = k} and {N(0,t) ≥ k+1} are disjoint.
Now, by the general definition of an expected value (discrete case),

H(t) = Σ_{k=1}^∞ k P{N(0,t) = k}
     = F_1(t) − F_2(t) + 2F_2(t) − 2F_3(t) + 3F_3(t) − 3F_4(t) + − ...
     = Σ_{k=1}^∞ F_k(t).   (A-3)

Since termwise differentiation is allowed here, the renewal density is

h(t) = Σ_{k=1}^∞ f_k(t).   (A-4)
Since T_k is the sum of k stochastically independent random variables, each with the pdf f_1, using the convolution theorem of the Laplace transform (notation of [2])

f*_k(s) = [f*_1(s)]^k.   (A-5)

Hence, the Laplace transformed renewal density is

h*(s) = Σ_{k=1}^∞ [f*_1(s)]^k = f*_1(s) / [1 − f*_1(s)].   (A-6)

(By the definition of the Laplace transform [see (A-14)] and because of

∫_0^∞ f_1(t) dt = 1,

we are sure that |f*_1(s)| < 1.)

If the above ordinary renewal process is an alternating one, with alternating interval lengths L (for component life) and D (for component down time), then for the renewal process of the restarts

h*(s) = f*_L(s) f*_D(s) / [1 − f*_L(s) f*_D(s)],   (A-7)

where use was made of the well-known convolution theorem yielding

f*_{L+D}(s) = f*_L(s) f*_D(s).   (A-8)

(A-7) refers to the (ordinary) renewal process of restarts. When looking at the moments of failure, one no longer has an ordinary renewal process but rather a special modified one [1]. Specifically, the time to the first "renewal" is L, and the renewal distance is (as before) L+D. Hence, for failures as "renewals", (A-5) has to be changed to

f*_k(s) = f*_L(s) [f*_L(s) f*_D(s)]^{k−1};   k ≥ 1.   (A-9)

This yields, instead of (A-7),

h̃*(s) := f*_L(s) / [1 − f*_L(s) f*_D(s)].   (A-10)

This result is plausible, since, as compared to (A-7), in h̃(t) each random variable under consideration is shortened by D.
Next we derive the Laplace transform of the availability A(t) of a component whose changes of state (good or bad) occur according to an ordinary alternating renewal process. Since it is plausible to assume that the component under consideration was "good" at t = 0, we must have an even number of renewals (of any kind) between 0 and t to have the component in a "good" state at t. Hence, the availability is

A(t) = Σ_{i=0}^∞ P{N(0,t) = 2i}.   (A-11)

By (A-2) and the integration rule of the Laplace transform1),

A*(s) = Σ_{i=0}^∞ (1/s) [f*_{2i}(s) − f*_{2i+1}(s)]2)
      = (1/s) [1 − f*_L(s)] Σ_{i=0}^∞ [f*_L(s) f*_D(s)]^i
      = (1/s) [1 − f*_L(s)] / [1 − f*_L(s) f*_D(s)].   (A-12)
Asymptotical results for the case t → ∞, i.e. the stationary case, can easily be gained by using the limit theorem

f(∞) := lim_{t→∞} f(t) = lim_{s→0} [s f*(s)].   (A-13)

To gain stationary values for the renewal density and for the availability we use the definition of the Laplace transform

1) Division by s corresponds to time-domain integration.
2) From F_0(t) = 1 (t > 0) there follows f*_0(s) = 1.
f*(s) := ∫_0^∞ f(t) exp(−st) dt = ∫_0^∞ f(t) dt − s ∫_0^∞ t f(t) dt + o(s).   (A-14)

If f is a pdf f_Y, and Y ≥ 0, then by normalization

∫_0^∞ f_Y(t) dt = 1,   (A-15a)

and by general definition of the mean value (continuous case),

∫_0^∞ t f_Y(t) dt = E(Y).   (A-15b)

Hence, by (A-14),

f*_Y(s) = 1 − s E(Y) + o(s) = 1 + O(s).   (A-16)

Now, at last, we can calculate h(∞) and A(∞). From (A-7),

h(∞) = lim_{s→0} { [1 + O(s)] / [E(L+D) + o(s)/s] } = 1 / [E(L) + E(D)].   (A-17)

Likewise, from (A-12),

A(∞) = lim_{s→0} { [E(L) + o(s)/s] / [E(L+D) + o(s)/s] } = E(L) / [E(L) + E(D)].   (A-18)
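The series representation (A-4,5) of the renewal density can be illustrated numerically. In the sketch below (not part of the original report), an ordinary renewal process with the illustrative interarrival pdf f_1 ~ Uniform[0.5, 1.5) (so E = 1) is treated: h(t) = Σ_k f_k(t) is built from repeated discrete convolutions and compared, far from the origin, with the stationary value h(∞) = 1/E = 1 of (A-17).

```python
# Numerical illustration of (A-4,5) and the limit (A-17).
dt, n = 0.01, 1000            # time grid covering 0 .. 10
lo, hi = 50, 150              # f1 = 1 on [0.5, 1.5) -> grid indices [50, 150)

h = [0.0] * n
fk = [1.0 if lo <= k < hi else 0.0 for k in range(n)]   # f_1 on the grid
for _ in range(18):           # 18 terms suffice: f_k(8) = 0 for k > 16,
    for k in range(n):        # since f_k is supported on [0.5 k, 1.5 k]
        h[k] += fk[k]
    # f_{k+1} = f_k * f_1, cf. (A-5); f_1 is the indicator of [0.5, 1.5)
    fk = [dt * sum(fk[k - m] for m in range(lo, min(hi, k + 1)))
          for k in range(n)]

h_late = h[int(8.0 / dt)]     # h(8), well past the transient oscillations
print(h_late)                 # close to 1 = 1/E(interarrival)
```

The early oscillations of h(t) around 1/E decay quickly for this spread-out interarrival distribution, which is why a moderate horizon already shows the limit (A-17).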
9 REFERENCES
[1] Cox D., Isham V.: Point processes. London: Chapman and Hall, 1980.
[2] Cox D.: Renewal theory. London: Methuen, 1962.
[3] Singh C.: Calculating the time-specific frequency of system failure. Trans. IEEE, vol. R-28 (1979), 124-126.
[4] Khintchine A.: Mathematical methods in the theory of queueing. London: Griffin, 1969.
[5] Papoulis A.: Probability, random variables and stochastic processes. New York: McGraw-Hill, 1965.
[6] Schneeweiss W.: Calculating the probability of Boolean expression being 1. Trans. IEEE, vol. R-26 (1977), 16-22.
[7] Schneeweiss W.: Computing failure frequency, MTBF & MTTR via mixed products of availabilities and unavailabilities. Trans. IEEE, vol. R-30 (1981), 362-363.
[8] Huang X.: Calculating the failure frequency of a repairable system. Microelectronics & Reliability 22 (1982), 945-947.
[9] Singh C., Schneeweiss W.: Calculating the frequency of Boolean expression being 1. Trans. IEEE, vol. R-26 (1977), 354-355, 358.
[10] Höfle-Isphording U.: Zuverlässigkeits-Rechnung (reliability calculus). Berlin: Springer, 1978.