
Probabilistic Confidentiality Properties based on Indistinguishability

Thomas Santen
Institut für Informatik
Westfälische Wilhelms-Universität Münster, Einsteinstr. 64, 48149 Münster
santen@acm.org

Abstract: This paper motivates three confidentiality properties based on the notion of indistinguishable behavior induced by adversary observations of nondeterministic and probabilistic systems. Concealed behavior is a possibilistic property, whereas ensured entropy and bounded risk are probabilistic properties. In contrast to noninterference-like information flow properties, these properties do not primarily aim at restricting information flow, but at keeping the differences between indistinguishable behavior confidential. To support the probabilistic definitions, the concept of the probability of a trace given an observation is clarified for systems permitting external, nondeterministic, and probabilistic choice.

1 Specification of Confidentiality

The question of specifying and analyzing confidentiality properties of IT-systems has attracted considerable attention in the research community over the last decades. Experience with information flow analysis over program code [Den82] and the Bell-La Padula model of multi-level security [BL75] shows that it is desirable to formulate confidentiality properties independently of models of an IT-system. Such models can be rather abstract descriptions of essential system behavior expressed, e.g., by labeled transition systems or process calculi, or they can be very detailed models such as program source code.

Conceptually distinguishing confidentiality properties from the models which may or may not satisfy those properties has two advantages. First, it makes it possible to theoretically investigate the relationships between different variants of confidentiality properties and the systems satisfying them. This includes the possibility to validate a particular property against models that are intuitively supposed to be "secure" or "insecure". Second, this distinction allows system and software engineers to specify the specific confidentiality requirements on a system independently of a particular system description. It should also allow them to trace such a property from the first analysis models of a system down to its eventual implementation, although few publications address this problem.

In the last two decades, considerable research has investigated the confidentiality property of noninterference, which basically requires that one part of the system ("Low") must not get information about the goings-on in the other part of the system ("High"). Following Goguen and Meseguer's seminal paper [GM82], quite a number of variants of noninterference considering systems with different properties, such as non-determinism, and considering different formulations of noninterference (possibilistic and probabilistic) have been published. Recently, Mantel [Man03] clarified the relationships between the variants of possibilistic noninterference. With his Modular Assembly Kit for Security Properties (MAKS), he identified basic properties that can be conjoined to produce different "noninterference-like" properties.

[Figure 1: System Model with Adversary and a Data Transmission System. Left: the users Alice, Bob, and Yves communicate with the system P over the functional channels a, b, and c; the adversary Yves additionally observes the adversary window w. Right: the data transmission system ATrans, consisting of ASender, ANet, and AReceiver, with channels inp and out and adversary window w.]

All of these properties are information flow properties. They essentially require the system to prevent information flow from High to Low: Low must not get to know anything about High's behavior (through the channels that the particular property addresses). For many applications, in particular in a civil multi-laterally secure setting, such a requirement is unrealistic [Rus01]: An implemented system will always leak information from High to Low. Preventing those flows may not only be technically hard to achieve, but they may be strictly unavoidable.

This paper proposes a family of confidentiality properties that do not aim at preventing or bounding information flow as such. They rather require the system to protect differences between system behaviors based on the observations that an adversary inevitably can accumulate during a particular run of the system. In particular for possibilistic properties, this view is not new [ZL97], but the focus is different: The properties presented here do not essentially rely on a set of confidential events, and their definitions are uniformly expressed in terms of sets of indistinguishable behaviors with the possibilistic property as a necessary condition of the probabilistic ones.

2 System Model

A confidentiality property makes a proposition about the relationship of a system to an adversary. The left-hand side of Figure 1 illustrates the system model that is the basis to define confidentiality properties in the following sections. This model has been introduced before [HPS01, SHP02].

A process P describes the behavior of a system in terms of the interaction with the system environment, which comprises users and possibly other systems. The system communicates with the environment through events on communication channels. The channel c on which an event takes place and the data d transmitted characterize an event c.d. To avoid some complexity related to unbounded nondeterminism, we assume that the sets of channels and data are finite. Hence, a process produces only a finite number of distinct events, the alphabet of the process.

A process P produces sequences of events. A trace is a finite sequence of events. The set traces(P) of all traces P can produce is the trace semantics of that process.

We are interested in system models that serve to specify intended system functions as well as to describe actual implementations. Therefore, the system process P may choose alternative behavior in three possible ways: with external choice, it may offer alternative behavior for the environment to choose from; with probabilistic choice, it may choose alternative behavior internally according to a given probability function; and with nondeterministic choice, it may leave open the decision whether to resolve a choice externally or probabilistically.

Both external choice and probabilistic choice can implement nondeterministic choice. Thus, nondeterministic choice is a means of leaving implementation decisions open in a specification. We assume that a final, executable implementation is probabilistically deterministic, i.e., it does not contain any nondeterministic choice.
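The three kinds of choice can be made concrete with a small trace simulator. The following is an illustrative Python sketch, not the paper's PCSP semantics; the tuple encoding and all names (`run`, `refine`, the process `P`) are hypothetical.

```python
import random

# A toy illustration (not the paper's PCSP semantics) of the three kinds of
# choice. A process is encoded as nested tuples; None means "stop".

def run(proc, env_policy, nondet_resolution):
    """Produce one trace of `proc`. `env_policy` resolves external choices;
    `nondet_resolution` resolves nondeterministic ones, e.g. probabilistically
    or by deferring to the environment, mirroring the remark that both
    external and probabilistic choice can implement nondeterminism."""
    trace = []
    while proc is not None:
        kind = proc[0]
        if kind == "prefix":        # ("prefix", event, continuation)
            trace.append(proc[1])
            proc = proc[2]
        elif kind == "ext":         # ("ext", {label: branch}) - environment picks
            branches = proc[1]
            proc = branches[env_policy(branches)]
        elif kind == "prob":        # ("prob", [(p, branch), ...]) - weighted coin
            r, acc = random.random(), 0.0
            for p, branch in proc[1]:
                acc += p
                if r < acc:
                    proc = branch
                    break
        else:                       # ("nondet", [branch, ...]) - left open
            proc = nondet_resolution(proc[1])
    return trace

# After event 'a', the specification leaves open whether to stop or do 'b':
P = ("prefix", "a", ("nondet", [None, ("prefix", "b", None)]))

# One refinement resolves that nondeterminism by a fair probabilistic choice:
refine = lambda branches: branches[0] if random.random() < 0.5 else branches[1]
print(run(P, env_policy=lambda b: next(iter(b)), nondet_resolution=refine))
```

Passing a different `nondet_resolution` yields a different probabilistically deterministic variant of the same specification, which is exactly the role the set of variants of P plays below.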

The system model sketched in this section has a formal semantics in terms of Probabilistic Communicating Sequential Processes (PCSP) [MMSS96]. A probabilistic process is a function assigning standard (non-probabilistic) processes a probability. The semantics of standard processes is the failures-divergences semantics of (standard) CSP [Ros98].

The details of that semantics and the technical definitions of the confidentiality properties will be published elsewhere [San05]. The purpose of this paper is to motivate the concepts from a security perspective rather than to elaborate on the technicalities of that formal framework. For that purpose, it suffices to know that a process produces a set of traces. For probabilistic processes, we need to assign probabilities to certain traces of a process, which is the topic of Section 4.

3 Adversary Views

To fulfill its intended function, the system interacts with its environment through functional channels. In the figure, the functional channels a, b, and c allow the users Alice, Bob, and Yves to communicate with the system.

The adversary window w is a distinguished channel modeling an adversary's means of observing the running system. All events on w are observations that a passive adversary may make of the system activity without interfering with the system. In the figure, Yves is an active adversary who has access not only to the window channel w but also to the functional channel c. Thus, Yves can influence the system behavior, even take part in ordinary communication with Alice and Bob, and at the same time watch the system through the adversary window w.

The adversary window is not a functional channel. It is not part of the behavioral description of the system. Therefore, the adversary cannot actively communicate with the system through w but can only watch events on w that the system generates.

The adversary window provides a means of describing adversary observations beyond the obvious ones possible through the adversary's functional channels. These can include events monitoring system activity or events providing information obtained by eavesdropping on communication lines between the system and Alice or Bob.

The situation described in Figure 1 relates to a particular adversary view (P, AI, w), which consists of the system process P, the adversary interface AI comprising all functional channels to which the adversary has access (in Fig. 1, AI = {c}), and the adversary window w. The adversary view is consistent¹ if all events on the adversary interface AI have a counterpart on the adversary window w. For a consistent adversary view, analysis concerning confidentiality can focus on the adversary window because it provides a complete picture of the information the adversary can gain about the executing system. In the following, all adversary views are supposed to be consistent. In a multi-lateral setting, different adversary views may serve to model the confidentiality concerns of different stakeholders.

The definitions of confidentiality properties in the following sections are based on the classes of indistinguishable system traces that an adversary window induces. Let (P, AI, w) be an adversary view. Two traces s, t ∈ traces(P) are indistinguishable by w (denoted s ≡_w t) if their projections to w are equal:

s ≡_w t  ⇔  s↾w = t↾w

where s↾w is the projection of s to the sequence of events on w. For a trace t, the equivalence class [t]_w ⊆ traces(P) comprises all traces s with s ≡_w t.

The set O_w(P) denotes all observations the process P can produce on w. Given an observation o ∈ O_w(P), the set² I_w^P(o) contains all traces producing the observation o:

I_w^P(o) = { t ∈ traces(P) | t↾w = o }   (1)

Obviously, an adversary cannot directly observe the differences between two members of I_w^P(o). If there is a trace t of P producing the observation o, then I_w^P(o) ≠ ∅ and I_w^P(o) = [t]_w. The set { I_w^P(o) | o ∈ O_w(P) } of all indistinguishability classes partitions traces(P).
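For finite trace sets, the projection and the classes of equation (1) are directly computable. The following Python sketch uses a made-up encoding of events as (channel, datum) pairs; the trace data are hypothetical.

```python
# Hypothetical finite-trace illustration of equation (1): events are
# (channel, datum) pairs, the adversary window is the channel "w".

def project(trace, w):
    """t|w: the subsequence of events of `trace` on channel w."""
    return tuple(e for e in trace if e[0] == w)

def indistinguishability_classes(traces, w):
    """Group the trace set into classes I_w(o), keyed by the observation o."""
    classes = {}
    for t in traces:
        classes.setdefault(project(t, w), set()).add(t)
    return classes

traces = {
    (("a", 1), ("w", "len3")),   # Alice sends, Yves sees length 3
    (("b", 2), ("w", "len3")),   # Bob sends, Yves sees length 3
    (("a", 1), ("w", "len5")),   # Alice sends, Yves sees length 5
}
for o, I in indistinguishability_classes(traces, "w").items():
    print(o, sorted(I))
```

Since every trace lands in exactly one class, the classes are disjoint and jointly exhaustive, matching the partition statement above.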

Consider the simple data transmission system on the right-hand side of Figure 1. Yves eavesdrops on the communication between Alice and Bob through the system ATrans, which consists of a sender, the network, and a receiver component. Suppose that w transmits to Yves the length of each message Alice sends to Bob, which of course also conveys the information that Alice does send a message to Bob at the particular point in time. Although quite abstract, this model reflects the typical situation of encrypted communication over an open, insecure network.

¹ A suitable injective mapping makes this notion precise.

² We use the set comprehension notation from the specification language Z. The set { x ∈ X; y ∈ Y | P(x, y) • t(x, y) } comprises all values of the term t(x, y) for x ∈ X, y ∈ Y satisfying P(x, y).

Depending on the way the system controls message "lengths", a considerable amount of information may flow to Yves through channel w. Nevertheless, in practice it often is neither desirable nor feasible to prevent such a channel to the adversary. It is then important to analyze whether that channel makes the system insecure in the envisaged context of use.

To consider indistinguishability classes as the basis for definitions of confidentiality properties is not new. Zakinthinos and Lee [ZL97] call indistinguishability classes low-level equivalence sets (LLES). They give a definition of a (possibilistic) security property as one that can be recast as a property holding for each indistinguishability class and show that several information flow properties can be defined as properties of those classes.

4 Probability of System Behaviors

Sections 6 and 7 introduce probabilistic confidentiality properties. In particular, they consider the probability p(t | o) of a trace t under the condition that a process P produced the observation o, i.e., the sequence of events o on the adversary window of P.

A definition of p(t | o) is non-trivial because it must answer the following four questions: How to assign a probability function to a set of traces that is not prefix-free, i.e., that may contain traces t₁ and t₂ where t₁ is a prefix of t₂? How to assign a probability function to the possibly infinite set of traces that may produce an observation o? How to assign probabilities to the traces that a nondeterministic choice produces? How to assign probabilities to the traces that an external choice produces?

The first two questions directly relate to the trace semantics of a process: The set traces(P) is prefix-closed, because a process can only produce the last event of a trace t after it has produced all events preceding that one in t, i.e., all prefixes of t. Furthermore, the observation o = t↾w produced by t on the adversary window w is the projection of t to the sequence of events in t on the channel w. The reverse mapping yields the set of traces I_w^P(o) producing the observation o on the channel w. This set may be infinite, because P may generate unboundedly many events on its functional channels between two observations on w.

It is not obvious how to probabilistically distinguish a trace t from its prefixes because the system may halt after producing a prefix, or it may continue producing events to complete t. In the first case, a probability function needs to assign a distinguished probability to the prefix, whereas in the second case both the prefix and t belong to the same stochastic event and therefore must not be distinguished probabilistically.

A solution to these problems lies in restricting the adversary with respect to the duration of an observation, i.e., the maximal length k of the system traces that are considered in I_w^P(o). Under this restriction, the probability of a trace t₁ with a length strictly less than k can be considered the probability of the system producing t₁ and halting afterwards, whereas the probability of a trace t₂ extending t₁ to a total of k events is the probability of the system not halting after t₁ but producing the full trace t₂. Given a finite alphabet, the upper bound k also makes I_w^P(o) finite, and thus makes the question of asking for the probability of a trace given the observation o sensible.

The upper bound k on the length of behaviors in I_w^P(o) reflects a realistic restriction on the adversary. The adversary can only wait for some finite time to observe o and complete the observation by deciding not to wait for another observable event.

The last two questions refer to the fact that assigning a probability to a trace only makes sense if the choice of each event in the trace is a probabilistic one. Therefore, the other two kinds of choice must be resolved probabilistically. Both probabilistic choice and external choice can implement nondeterministic choice. Therefore, we need to consider all possible ways of implementing nondeterminism in P, which produces the set Det(P) of variants³ of P with only probabilistic or external choice.

Only the system environment can resolve an external choice probabilistically. Therefore, asking for p(t | o) is sensible only under the assumption that the system environment has a particular stochastic behavior. The user model EU is a process modeling the stochastic behavior of the non-malicious environment, i.e., it communicates with the system on the functional channels not in the adversary interface AI. Of course, it would be inadequate to restrict the adversary to a specific behavior. Therefore, the confidentiality properties we define consider all processes EA modeling adversary behavior at the adversary interface AI such that the parallel composition R = (Q ‖_w (EU ‖ EA)) ↓ k of a variant Q ∈ Det(P) with the two environment processes⁴ produces a completely probabilistic process that does not admit any external or nondeterministic choice, and that diverges after k events. An environment process EU ‖ EA resolving all external choices in Q probabilistically is called admissible for Q.

In summary, to define the probability p(t | o), it is necessary to augment an adversary view (P, AI, w) with an upper bound k on the length of system runs and with a stochastically behaving environment EU. The adversary model (P, AI, w, k, EU) thus captures assumptions on the behavior of the non-malicious environment and the adversary's observational power.
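Once the composition with an admissible environment yields a distribution over complete runs of length at most k, conditioning on an observation is a normalization over the class I_w(o). A hypothetical numeric sketch, assuming that distribution is already given as a dictionary:

```python
# A numeric sketch of p(t | o), assuming the distribution over complete runs
# of length at most k is already given (computing it from the process term is
# what the PCSP semantics provides). Events are (channel, datum) pairs; "w"
# is the adversary window. The distribution below is made up.

def p_given_o(run_dist, o, w="w"):
    """Conditional probabilities p(t | o) over the class I_w(o)."""
    proj = lambda t: tuple(e for e in t if e[0] == w)
    mass = sum(p for t, p in run_dist.items() if proj(t) == o)
    return {t: p / mass for t, p in run_dist.items() if proj(t) == o}

run_dist = {  # probabilities of complete runs; they sum to 1
    (("inp", "key"), ("w", 3)): 0.2,
    (("inp", "data"), ("w", 3)): 0.3,
    (("inp", "data"), ("w", 5)): 0.5,
}
for t, p in p_given_o(run_dist, (("w", 3),)).items():
    print(t, p)   # the two runs observed as <w.3>, with masses 0.4 and 0.6
```

The upper bound k is what makes `run_dist` finite in the first place, so the normalizing sum is well-defined.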

5 Concealed Behavior

The first confidentiality property, called concealed behavior, is a possibilistic property. It rephrases indistinguishability from an indicative proposition ("the system conceals the differences of traces in an indistinguishability class") into an optative proposition requiring the system to keep certain traces indistinguishable. Prescribing certain sets of traces to be kept indistinguishable is the means to define that property.

A mask ℳ for a process P is a set of pairwise disjoint subsets of the traces of P. A mask need not partition traces(P). The system is required to conceal the differences of the members of a set M ∈ ℳ to the extent specified by a specific confidentiality property. There is no confidentiality requirement for the traces of P that are not contained in ⋃ℳ. These traces are "don't cares."

³ These are the maximal refinements of P in PCSP.

⁴ The environment communicates with Q on the functional interface only. It does not synchronize on the adversary window w, which the arrow pointing at w symbolizes in the composition operator ‖_w.

For the data transmission system in Figure 1, we could require all traces consisting of three messages that either serve to exchange a session key or to transmit data encoded with a session key to be indistinguishable. Then a set M′ ∈ ℳ would consist of exactly the traces with events on the channels inp and out that are needed to exchange those message triples. M′ would, however, not prescribe the observations on w directly, because it must be defined independently of the adversary model.

The confidentiality property of concealing a mask ℳ directly relates to the set-theoretic properties of indistinguishability. If differences between the traces in a set M ∈ ℳ are to be kept confidential, they must – at least – be indistinguishable. The following Definition 1 formalizes that property by set inclusion: M ⊆ I_w^R(o), where the process R is the system P in a suitable environment.

That set inclusion not only requires the system to produce the same observation o for all members of M if they occur, but it also requires the process R to be capable of producing all members of M. This is a non-trivial requirement because R consists of P composed with an environment process and, therefore, R may produce fewer traces than P does.

If R could not produce all members of M but, in the extreme case, just one of them, then adversaries could infer the exact system behavior from a given observation on the basis of their knowledge about the system environment. Arguably, that possible inference based on a priori knowledge would not reduce confidentiality – after all, the other members of M could not occur anyway. But an adversary might be able to modulate the system behavior such that many members of an M would become impossible. Then guessing the remaining ones would be comparatively easy. Hence, we require a set C of classes to cover ℳ:

Covers(C, ℳ)  ⇔  ∀ M ∈ ℳ • ∃ I ∈ C • M ⊆ I   (2)

As explained in Section 4, the definition of concealed behavior also needs to consider all adversary behaviors EA for a given user model EU.

Definition 1 (Concealed Behavior)
Let ℳ be a set of mutually disjoint sets of traces over the alphabet of P, each trace with a length of at most k. The adversary model (P, AI, w, k, EU) conceals ℳ, written Conceals_ℳ(P, AI, w, k, EU), if

∀ EA • (EU ‖ EA) admissible for (P, k) ⇒
  let R = (P ‖_w (EU ‖ EA)) ↓ k • Covers({ I_w^R(o) | o ∈ O_w(R) }, ℳ)   (3)

Concealed behavior is a property of sets of traces. It is a closure property [McL96] and therefore cannot be expressed as a property of system traces.
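For a single resolved, finite process R, the covering condition of Definition 1 can be checked mechanically: every mask member must lie wholly within one indistinguishability class. This simplified Python sketch ignores the quantification over adversary environments EA and uses made-up traces:

```python
# A finite sketch of Definition 1 for one resolved process R: the mask is
# concealed iff every member M is wholly contained in one
# indistinguishability class of R.

def project(trace, w):
    return tuple(e for e in trace if e[0] == w)

def conceals(traces_R, mask, w="w"):
    classes = {}
    for t in traces_R:
        classes.setdefault(project(t, w), set()).add(t)
    return all(any(M <= I for I in classes.values()) for M in mask)

traces_R = {
    (("a", 0), ("w", "x")),
    (("b", 1), ("w", "x")),
    (("a", 0), ("w", "y")),
}
M1 = {(("a", 0), ("w", "x")), (("b", 1), ("w", "x"))}   # same observation
M2 = {(("a", 0), ("w", "x")), (("a", 0), ("w", "y"))}   # distinguishable
print(conceals(traces_R, [M1]))  # True
print(conceals(traces_R, [M2]))  # False
```

The full property additionally demands that this holds for the process R obtained under every admissible environment, which the sketch takes as given.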

An adversary model does not necessarily conceal the set of its indistinguishability classes { I_w^P(o) | o ∈ O_w(P) }, because the environment can prohibit certain behavior such that the process R produces only a subset of I_w^P(o) for some o. If the user environment EU and all adversary environments EA admit P to produce a class I_w^P(o) either completely or not at all, then the adversary model does conceal that set.

The following observations⁵ clarify the relationship between concealed behavior and noninterference-like information flow properties that can be expressed in Mantel's MAKS.

MAKS expresses information flow properties as conjunctions of basic security predicates (BSP) related to a view 𝒱 = (C, N, V) partitioning the events E of a system into confidential ones C, visible ones V, and non-visible but not confidential ones N. Each BSP has the following form, where the predicate S_𝒱(β, τ, c, v) implies that β is a prefix of τ:

BSP_𝒱(Tr)  ⇔  ∀ τ ∈ Tr • ∀ c ∈ C • ∀ v ∈ V • ∀ β ∈ E* • S_𝒱(β, τ, c, v) ⇒ ∃ τ′ ∈ Tr • T_𝒱(τ′, β, τ, c, v)   (4)

Thus, a BSP basically requires that, under certain conditions, there exists some trace τ′ ∈ Tr for each τ ∈ Tr. This can be represented by a system of masks ℳ_𝒱(Tr). The members of ℳ_𝒱(Tr) correspond to the different choices of τ′ in (4).

BSP_𝒱(traces(R))  ⇐  Covers({ I_w^R(o) | o ∈ O_w(R) }, ℳ_𝒱(traces(R)))   (5)

It is an implicit assumption underlying MAKS that the atomic entities of confidential information are events (gathered in C). With a mask, in contrast, it is possible to express confidentiality requirements on the sequencing of events. It is not obvious that this kind of requirement can always be reduced to one on single events.

6 Ensured Entropy

Why Definition 1 insists that a member of a mask is contained completely in an indistinguishability class becomes clear when we consider probabilistic confidentiality properties.

If the adversary knows about the stochastic behavior of the system, then concealed behavior is a weak confidentiality requirement. In the data transmission system of Figure 1, suppose Yves observes a sequence of three message lengths. If Yves knows that a triple of messages with those lengths is characteristic for exchanging a session key between Alice and Bob, with the second message holding the session key, then Yves can infer at least that Alice and Bob are about to start a new session. For being "characteristic" for key exchange, it suffices to know that there is only a small chance, say, that a triple of messages with those lengths is not a key exchange.

This observation motivates to define another confidentiality property based on the adversary's uncertainty about the true system behavior behind an observation. A measure of that uncertainty is the entropy H_w(R | o) of the indistinguishability class I_w^R(o) given an observation o of a process R, which is a fully probabilistic variant of P as defined in Section 4. The conditional entropy is defined by

H_w(R | o) = − Σ_{t ∈ I_w^R(o)} p(t | o) · log p(t | o)   (6)
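Equation (6) is the Shannon entropy of the conditional distribution p(· | o). A small numeric sketch, using hypothetical probabilities and log base 2:

```python
import math

# Numeric sketch of equation (6): the entropy of an indistinguishability
# class, computed from conditional probabilities p(t | o). Values are
# hypothetical; log base 2 measures the uncertainty in bits.

def class_entropy(p_t_given_o):
    """H_w(R | o) = - sum over t of p(t|o) * log2 p(t|o)."""
    return -sum(p * math.log2(p) for p in p_t_given_o.values() if p > 0)

# Two systems producing the same two-trace class behind one observation:
# a near-certain class is almost worthless at concealing the true behavior.
print(class_entropy({"key-exchange": 0.5, "data": 0.5}))    # 1.0 bit
print(class_entropy({"key-exchange": 0.99, "data": 0.01}))  # ~0.08 bits
```

This is exactly the weakness of the key-exchange example above: both classes are equal as sets, but the skewed one lets Yves guess the true trace with high confidence.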

⁵ The limited space does not allow us to elaborate that relationship in detail.

The confidentiality property of ensured entropy requires that there is a probabilistically deterministic variant Q of the system process P that, put in a suitable environment, guarantees lower bounds on the entropy of the indistinguishability classes covering a given mask ℳ.

Definition 2 (Ensured Entropy)
Let η : ℳ → ℝ map classes of ℳ to possible values of entropy. The adversary model ensures the entropy η for ℳ, written Ent_{ℳ,η}(P, AI, w, k, EU), if

∃ Q ∈ Det(P) • ∀ EA • (EU ‖ EA) admissible for (P, k) ⇒
  let R = (Q ‖_w (EU ‖ EA)) ↓ k • Covers({ I_w^R(o) | o ∈ O_w(R) }, ℳ) ∧
  ∀ M ∈ ℳ • ∀ o ∈ O_w(R) • M ⊆ I_w^R(o) ⇒ η(M) ≤ H_w(R | o)   (7)

The fact that Definition 2 existentially quantifies over Q needs further explanation. The refinements Q of P contained in Det(P) are possible implementations of the specification P. One would expect all refinements of P to satisfy the "usual" functional requirements, which can be formalized by properties of traces, that P satisfies. The well-known refinement paradox shows, however, that resolving nondeterminism may compromise confidentiality properties even in a possibilistic setting. This is also true for our probabilistic properties.

Therefore, Definition 2 only requires that at least one (functionally correct) implementation of P ensures the specified lower bound on the entropy of its indistinguishability classes. If this property is preserved in the implementation process refining P to some Pn in several steps, and Pn does not have any nondeterministic choices, then Det(Pn) is a singleton and Pn ensures the specified lower bounds on the entropy.

In the example, putting a lower bound on the entropy of I_w^R(o) requires the system to produce message triples of those lengths that are not key exchange messages with sufficiently high probabilities. Thus, a system satisfying that property keeps key exchange messages better confidential than one that does not.

7 Bounded Risk

The following example illustrates a situation where an entropy-based confidentiality property may not be adequate. For a probability p, consider a steganographic data transmission system StSystem_p that hides a message m within some carrier data c in the communication of two parties by merging those two data sets probabilistically: For each transmitted bit d_i, the system chooses the next data bit m_j and transmits d_i = data.m_j with probability p, or it chooses the next carrier bit c_k and transmits d_i = carrier.c_k with probability 1 − p. A passive adversary observes the communication of the two parties but cannot distinguish message bits from carrier bits, i.e., the adversary observes m_j or c_k without the marker data or carrier. Given a sequence of observed bits o, the indistinguishability class I_w^{StSystem_p}(o) contains all traces containing the same sequence of bits o with varying markers data or carrier for the individual bits. Thus, an indistinguishability class effectively hides the sequence of markers from direct observation by the adversary.

The entropies of StSystem_p and StSystem_{1−p} given an observation o are identical, because both systems hide the markers data or carrier equally well. Nevertheless, for p close to 1 one would consider StSystem_{1−p} "better" than StSystem_p, because for the latter the adversary has good chances of discovering much of the message m by assuming that each transmitted bit is a message bit, whereas guessing that each transmitted bit is a carrier bit for StSystem_{1−p} does not help the adversary to discover the message m.

In a situation like this, a confidentiality property that does not treat all differences between members of an indistinguishability class alike is called for. The property of bounding the risk of an adversary's estimation of the true system behavior based on an observation makes that distinction possible.

A Bayesian adversary model (P, AI, w, k, EU, L) is one augmented with a loss function L : traces(P) × traces(P) → ℝ that assigns real-valued losses to pairs of system traces. The risk of an estimation e(o) given an observation o is the expected loss of that estimation:

L_w(e | o) = Σ_{t ∈ I_w(o)} L(e(o), t) · p(t | o)   (8)

A Bayesian estimator e* is one that minimizes the risk L_w(e | o) for all observations o:

∀ o • L_w(e* | o) = min_e L_w(e | o)   (9)

The crucial parameter of a Bayesian estimator is the loss function. It reflects the value an adversary assigns to the differences of indistinguishable behavior. More precisely speaking, the loss function models the stakeholders' assumption about the adversary's esteem. This, of course, reflects the stakeholders' esteem of the differences between indistinguishable system behavior. Thus, the loss function is part of the adversary model.

Like entropy, the Bayesian estimator quantifies the degree to which the system keeps differences between indistinguishable behavior confidential. The Bayesian estimator – by way of the loss function – distinguishes the particular behavior that the adversary will infer from an observation. The risk assigned to that estimation also measures the expected distance between the aspects of the true system behavior that the adversary is interested in and the estimation. Thus, we can interpret the minimum risk L_w(e* | o) as a measure of how well the system keeps confidential the interesting aspects of indistinguishable system behavior, which the loss function distinguishes: the higher the expected loss is for the adversary, the better the system keeps its secret.

Similar to ensured entropy, the confidentiality property of bounded risk puts lower bounds on the minimal risk in a Bayesian adversary model.

Definition 3 (Bounded Risk)
Let (P, AI, w, k, EU, L) be a Bayesian adversary model, and let ρ : ℳ → ℝ map classes of ℳ to risk values for L. The adversary model bounds the risk of Bayesian estimation within ℳ by ρ, written Rsk_{ℳ,ρ}(P, AI, w, k, EU, L), if for all M ∈ ℳ the risk ρ(M) is a lower bound to the Bayesian risk assigned to e* for each indistinguishability class of (P, AI, w, k, EU) containing M:

∃ Q ∈ Det(P) • ∀ EA • (EU ‖ EA) admissible for (P, k) ⇒
  let R = (Q ‖_w (EU ‖ EA)) ↓ k • Covers({ I_w^R(o) | o ∈ O_w(R) }, ℳ) ∧
  ∀ M ∈ ℳ • ∀ o ∈ O_w(R) • M ⊆ I_w^R(o) ⇒ ρ(M) ≤ L_w(e* | o)   (10)

In the example, considering the map of all indistinguishability classes and putting a uniform lower bound on the risk associated with each class, StSystem_p bounds the risk accordingly for probabilities p within a suitable range. For those probabilities, an estimation with a minimal risk can be expected to have at least two more bits with wrongly assigned markers than it has correctly assigned data-markers.
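For small observation lengths, the Bayesian risk of Definition 3 can be computed exhaustively for the steganographic example. The sketch below makes two simplifying assumptions not spelled out this concretely in the paper: each transmitted bit carries its marker independently (data with probability p), and the loss is the Hamming distance between marker sequences.

```python
from itertools import product

# Exhaustive sketch of the bounded-risk computation for StSystem_p, under
# simplifying assumptions: independent per-bit markers ('data' with
# probability p) and a Hamming loss on marker sequences.

def bayes_risk(p, n):
    """min over estimations e of sum_t L(e, t) * p(t | o) for n observed bits."""
    markers = list(product(["data", "carrier"], repeat=n))
    prob = lambda t: p ** t.count("data") * (1 - p) ** t.count("carrier")
    loss = lambda e, t: sum(a != b for a, b in zip(e, t))
    risk = lambda e: sum(loss(e, t) * prob(t) for t in markers)
    return min(risk(e) for e in markers)

# Per bit, the Bayes estimator picks the likelier marker, so the minimal
# risk is n * min(p, 1 - p); the exhaustive computation agrees:
print(round(bayes_risk(0.2, 4), 6))  # 0.8
print(round(bayes_risk(0.5, 4), 6))  # 2.0
```

Under these assumptions the risk is symmetric in p and 1 − p, which matches the observation above that entropy alone cannot distinguish StSystem_p from StSystem_{1−p}; a loss function that penalizes only wrongly revealed data bits would break that symmetry.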

8 Conclusions

The confidentiality properties introduced in this paper provide an approach to specifying confidentiality with some uniformity between the possibilistic and the probabilistic properties. In contrast to standard possibilistic properties, concealed behavior requires the system to possibly produce all and not just one alternative behavior specified in the members of a mask. This fits well with the probabilistic properties, which restrict the relative probabilities of the traces in the members of a mask.

We hope that ensured entropy and bounded risk are better suited to capture "civil" confidentiality requirements than probabilistic noninterference [Gra91] is, because they do not aim at blocking all covert channels but accept the fact that there usually is a channel of considerable capacity to adversaries. In a refinement-based setting, those properties can be used to specify allowed adversary observations abstractly, and to describe unavoidable adversary observations in an implementation. The refinement relation then must ensure that the unavoidable observations do not compromise the amount of confidentiality specified for system behavior that is indistinguishable with respect to the allowed (abstract) observations. Such a refinement-based development methodology has been sketched in previous work [HPS01, SHP02]. The technical definition of a data and behavioral refinement preserving those properties will be published shortly [San05].

As with any statistical decision theory, crucial questions are to determine realistic probability functions describing the stochastic behavior of the system and its environment, and to specify adequate bounds for ensured entropy and bounded risk. Further research must investigate how stable ensured entropy and bounded risk are under (small) changes of the user environment behavior, because the stochastic environment model will always be a best guess (as in any "Bayesian" approach), and a security verdict should not depend on small errors in that model. Concerning the specified bounds, we envisage a development methodology that will systematically adjust the bounds to reconcile security wishes ("100% confidentiality") with practically achievable security.

Acknowledgment. Thanks to Heiko Mantel for raising interesting questions during a brief meeting in Dresden, and to Andreas Pfitzmann for sharing his views on security. The anonymous reviewers provided valuable comments and criticism, and Maritta Heisel commented on a draft of this paper.

References

[BL75] D. Bell and L. LaPadula. Secure Computer System: Unified Exposition and Multics Interpretation. Technical Report MTR-2997 Rev. 1, MITRE Corporation, 1975.

[Den82] D. Denning. Cryptography and Data Security. Addison-Wesley, 1982.

[GM82] J. A. Goguen and J. Meseguer. Security Policies and Security Models. In IEEE Symposium on Security and Privacy, pages 11–20. IEEE Computer Society Press, 1982.

[Gra91] J. W. Gray, III. Toward a mathematical foundation for information flow security. In Proc. IEEE Symposium on Security and Privacy, pages 21–34, 1991.

[HPS01] M. Heisel, A. Pfitzmann, and T. Santen. Confidentiality-Preserving Refinement. In 14th IEEE Computer Security Foundations Workshop, pages 295–305. IEEE Computer Society Press, 2001.

[Man03] H. Mantel. A Uniform Framework for the Formal Specification and Verification of Information Flow Security. PhD thesis, Universität des Saarlandes, 2003.

[McL96] J. McLean. A General Theory of Composition for a Class of "Possibilistic" Properties. IEEE Transactions on Software Engineering, 22(1):53–67, 1996.

[MMSS96] C. Morgan, A. McIver, K. Seidel, and J. W. Sanders. Refinement-Oriented Probability for CSP. Formal Aspects of Computing, 8(6):617–647, 1996.

[Ros98] A. W. Roscoe. The Theory and Practice of Concurrency. Prentice Hall, 1998.

[Rus01] J. Rushby. Security Requirements Specifications: How and What? In Proc. Symposium on Requirements Engineering for Information Security (SREIS), Indianapolis, IN, March 2001.

[San05] T. Santen. Security Engineering: Requirements Analysis, Specification and Implementation. Technische Universität Berlin, 2005. Habilitationsschrift, to appear.

[SHP02] T. Santen, M. Heisel, and A. Pfitzmann. Confidentiality-Preserving Refinement is Compositional – Sometimes. In D. Gollmann, G. Karjoth, and M. Waidner, editors, Computer Security – ESORICS 2002, LNCS 2502, pages 194–211. Springer-Verlag, 2002.

[ZL97] A. Zakinthinos and E. S. Lee. A general theory of security properties. In Proc. IEEE Symposium on Security and Privacy, pages 94–102, 1997.
