
In this thesis, we will not further focus on BPM itself, but only on the closely related analysis technique of process mining that mirrors many BPM concepts like workflow perspectives and patterns (see Section 4). For relations of BPM to further topics considered in this thesis, we refer to the literature on workflow simulation (e.g. Rozinat et al., 2009c) and agent-oriented workflow management systems (e.g. Reese, 2009).

Following the often-cited idea of a "virtual laboratory", the experimental setup describes the control and observation apparatus applied in an experiment. This includes settings of experiment control parameters (e.g. simulation duration) on the one hand, and the setup of observers and analyses on the other hand. The execution of an experiment specification within an experimental setup leads to a series of simulation runs (see also Wittmann, 1993, p. 56).

2.4.1.1. Experimental Design

The main goal of experimental design is to evaluate a preferably wide range of simulation model behavior by simulating as small a number of parameter configurations, also called scenarios, as possible (Page and Kreutzer, 2005, p. 190). In manual experimental design this goal is achieved through systematic parameter variations. A common approach is the 2^k factorial design (Page and Kreutzer, 2005, p. 190). In this design, we identify a characteristic high and low value for each of the model's k parameters.20 We then perform a simulation run for each combination of parameter values, leading to a total of 2^k runs. More advanced techniques for experimental design are e.g. presented by Law and Kelton (2000, Ch. 12).
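To make the combinatorial structure of the 2^k design concrete, the following minimal Python sketch enumerates all scenarios for a hypothetical model with k = 3 parameters. The parameter names, level values, and the run_simulation stub are purely illustrative assumptions, not part of any particular simulation framework.

```python
from itertools import product

# Hypothetical low/high levels for k = 3 model parameters (illustrative values).
factor_levels = {
    "arrival_rate": (0.5, 2.0),   # jobs per minute
    "service_time": (1.0, 4.0),   # minutes
    "num_servers":  (1, 3),
}

def run_simulation(config):
    """Placeholder for a single simulation run returning some response measure."""
    return 0.0  # a real experiment would execute the model with this configuration

# One run per combination of low/high values: 2^k = 8 scenarios for k = 3.
scenarios = [dict(zip(factor_levels, combo))
             for combo in product(*factor_levels.values())]

results = [(scenario, run_simulation(scenario)) for scenario in scenarios]
print(f"{len(scenarios)} scenarios evaluated")  # -> 8
```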

The main technique for automated experimental design is simulation-based optimization (see e.g. Page and Kreutzer, 2005, pp. 190 and Ch. 13), which is used to automatically optimize scenarios that are too complex for analytical optimization. Simulation and optimization techniques are integrated as follows (Page and Kreutzer, 2005, Sec. 13.2): Given a model class, an initial parameter configuration is chosen, and a simulation of this scenario is run. The results of the simulation are then evaluated by means of an objective function. Based on this evaluation, an optimization algorithm tries to compute a 'better' configuration that is again evaluated in a simulation run. This iterative process usually continues until the objective value converges.

Note that simulation-based optimization is not guaranteed to find an optimal configuration due to the use of (stochastic) simulation and often heuristic optimization techniques (e.g. genetic algorithms, see Gehlsen, 2004).
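The following sketch illustrates the iterative integration of simulation and optimization described above. It is a deliberately simplified stand-in: a random neighbourhood search replaces the heuristic optimizer (e.g. a genetic algorithm), a fixed iteration budget with a tolerance replaces a proper convergence test, and the simulate, objective, and neighbour functions are hypothetical placeholders.

```python
import random

def simulate(config):
    """Placeholder for a stochastic simulation run; here a noisy toy response surface."""
    x = config["arrival_rate"]
    return (x - 1.5) ** 2 + random.gauss(0, 0.05)

def objective(output):
    """Placeholder objective function, e.g. a mean waiting time to be minimized."""
    return output

def neighbour(config):
    """Heuristic optimizer step: perturb one parameter (a crude stand-in for e.g. a GA)."""
    key = random.choice(list(config))
    return {**config, key: config[key] * random.uniform(0.8, 1.2)}

def optimize(initial_config, max_iterations=200, tolerance=1e-3):
    """Iterate: simulate a scenario, evaluate it, let the optimizer propose a new one."""
    best_config = initial_config
    best_value = objective(simulate(best_config))
    for _ in range(max_iterations):
        candidate = neighbour(best_config)
        value = objective(simulate(candidate))   # every evaluation is a simulation run
        if value < best_value - tolerance:
            best_config, best_value = candidate, value
    return best_config, best_value

best, value = optimize({"arrival_rate": 0.5})
print(best, round(value, 3))
```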

2.4.2. Output Analysis

Law and Kelton (2000, pp. 496) note that the proper output analysis of (stochastic) simulations is an often neglected aspect in practical studies. In contrast, many textbooks largely emphasize techniques for statistical analysis (examples include Law and Kelton, 2000, Ch. 9-11; Banks et al., 1999, Ch. 12-13). However, the diversity of analysis techniques applied in simulation exceeds mere statistics, since informal as well as formal techniques from several fields can be applied. The classification scheme in Figure 2.13 shows one possibility to structure the different analysis techniques applied in simulation.

The well-known distinction of statistical analysis techniques into the exploratory and the confirmatory approach is also relevant in simulation (see e.g. Köster, 2002). Exploratory techniques are applied to gather knowledge about a model's structural or behavioural features, while confirmatory techniques serve to test [...] pre-established hypotheses (Page and Kreutzer, 2005, p. 210) that represent expectations on a scenario (in model comparisons) or knowledge about the real system (in validation).

20The 2^k factorial design is thus related to software engineering's equivalence partitioning and extreme input testing (see e.g. Balci, 1998, pp. 370).

[Figure 2.13 classifies analysis techniques along the dimensions approach (exploratory, confirmatory), data (trace-based, result-based), analysis time (online/at runtime, offline), degree of formality (qualitative/informal, quantitative/statistical, symbolic/formal), and purpose (validation, single system analysis, comparison of alternatives).]

Figure 2.13.: A classification scheme for analysis techniques. The scheme was derived from several sources in the literature and from our classification of validation techniques presented in (Page and Kreutzer, 2005, p. 211; see also Figure 2.16).


Following Ritzschke and Wiedemann (1998, Sec. 1), output analyses are either based on raw event traces observed during simulation or on preprocessed results (simulation reports) produced by specific data collectors in the experimental setup (e.g. average queue waiting times):

In trace-based analysis all available information is logged and subsequently filtered and aggregated. This allows for detailed and temporally fine-grained analyses. Furthermore, the trace can be analysed from different perspectives without modifying the experimental setup and rerunning the simulation (Ritzschke and Wiedemann, 1998, Sec. 1). A drawback of trace-based analyses is the high computational effort necessary to process large trace files, and the reduced convenience compared to result-based analyses with specific data collectors connected to the model components (Ritzschke and Wiedemann, 1998, Sec. 1).
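As an illustration of trace-based analysis, the following Python sketch filters and aggregates a raw event trace into mean waiting times per station after the run. The trace format, station names, and event types are assumptions made for this example only.

```python
from collections import defaultdict

# Assumed trace record format: (timestamp, entity_id, station, event_type),
# logged during the run, with event_type in {"enqueue", "start_service"}.
trace = [
    (0.0, "job1", "cpu", "enqueue"),
    (1.5, "job1", "cpu", "start_service"),
    (0.8, "job2", "cpu", "enqueue"),
    (2.6, "job2", "cpu", "start_service"),
]

def mean_waiting_times(trace):
    """Aggregate the raw trace into mean waiting time per station (offline)."""
    enqueue_times, waits = {}, defaultdict(list)
    for time, entity, station, event in sorted(trace):
        if event == "enqueue":
            enqueue_times[(entity, station)] = time
        elif event == "start_service":
            waits[station].append(time - enqueue_times.pop((entity, station)))
    return {station: round(sum(w) / len(w), 3) for station, w in waits.items()}

print(mean_waiting_times(trace))  # {'cpu': 1.65}
```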

Analyses can either be performed after the simulation, taking into account the whole observed data set (offline analysis), or during the simulation, taking into account the currently available data (online analysis).21 Apart from animations (Page and Kreutzer, 2005, Sec. 9.6), online analyses only appear reasonable if feedback of results into the running simulation is required.

A typical example is the reset of statistical counters after detecting the end of a simulated process' transient phase (Page and Kreutzer, 2005, pp. 174). Generally, online analyses are algorithmically more demanding than offline analyses due to the need to incrementally update the results when more data becomes available.
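A minimal sketch of such an online data collector is shown below: a running mean is updated incrementally as observations arrive, and a reset method allows discarding the statistics gathered during the transient phase. The class and its interface are hypothetical, not taken from an existing simulation library.

```python
class OnlineMeanCollector:
    """Incrementally maintained sample mean, usable while the simulation is running."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, value):
        # Incremental (online) update: no need to store the full observation series.
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def reset(self):
        """E.g. called once the end of the transient (warm-up) phase is detected."""
        self.count, self.mean = 0, 0.0

collector = OnlineMeanCollector()
for waiting_time in [5.2, 4.8, 6.1]:      # warm-up observations
    collector.update(waiting_time)
collector.reset()                          # discard transient-phase statistics
for waiting_time in [2.1, 2.4, 1.9]:      # steady-state observations
    collector.update(waiting_time)
print(round(collector.mean, 2))            # 2.13
```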

Another typical criterion to classify analysis techniques is the degree of formality, which is subdivided into qualitative, quantitative, and symbolic techniques in (Page and Kreutzer, 2005, p. 210): Qualitative techniques are mostly based on visualization. Quantitative methods are often rooted in statistics. In this thesis we will also consider symbolic techniques from fields like data mining or formal verification (Page and Kreutzer, 2005, p. 210; see also Brade, 2003, p. 56).

21see e.g. Page and Kreutzer (2005, p. 242)


Common purposes for the application of data analysis techniques in simulation include the analysis of real system data during model building, the analysis of a single simulation run, the comparison of multiple scenarios (see e.g. Law and Kelton, 2000, Ch. 9,10), and operational validation as a comparison between simulation and real system data (see Section 2.4.3).
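As a small confirmatory example of comparing two alternatives (or of operational validation, where one sample stems from the real system), the sketch below computes an approximate confidence interval for the difference of two mean responses. The normal approximation and the sample values are simplifying assumptions for illustration; textbook treatments (e.g. Law and Kelton, 2000, Ch. 9, 10) describe more rigorous procedures.

```python
from statistics import mean, stdev
from math import sqrt

def diff_of_means_ci(sample_a, sample_b, z=1.96):
    """Approximate 95% confidence interval for mean(a) - mean(b).

    Uses a normal approximation; each observation should stem from an
    independent replication (run) of the respective scenario.
    """
    diff = mean(sample_a) - mean(sample_b)
    half_width = z * sqrt(stdev(sample_a) ** 2 / len(sample_a)
                          + stdev(sample_b) ** 2 / len(sample_b))
    return diff - half_width, diff + half_width

# Hypothetical mean waiting times from independent replications of two scenarios
# (or: simulation output vs. measurements from the real system).
scenario_a = [4.1, 3.8, 4.4, 4.0, 4.2]
scenario_b = [3.2, 3.5, 3.0, 3.4, 3.1]

low, high = diff_of_means_ci(scenario_a, scenario_b)
if low > 0 or high < 0:
    print(f"Difference appears significant at roughly the 5% level: [{low:.2f}, {high:.2f}]")
else:
    print(f"No significant difference detected: [{low:.2f}, {high:.2f}]")
```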

2.4.3. Validation

When simulation models are used as a basis for decision making, it is vital to ensure that the analysis of the model leads to similar decisions as an analysis of the represented system (Page, 1991, p. 147), i.e. that the model is valid (Page and Kreutzer, 2005, pp. 195). In (Page and Kreutzer, 2005, p. 196), we emphasized the attention paid to validation in the simulation literature:

Following Page (1991, pp. 146) we should ideally accept model validity as one of the most important criteria for judging model quality. [...] the wide range of literature on this topic reflects its importance. There are numerous papers and textbooks, which emphasise different aspects, such as practical techniques (e.g. Balci, 1998), statistical methodology (e.g. Kleijnen, 1999), or [...] similarities between [... simulation] validation and [...] the philosophy of science (e.g. Naylor and Finger, 1967).

Other disciplines, such as software engineering, theoretical computer science, or statistics have developed approaches [...] which are also relevant for simulation. Kleindorfer and Ganeshan (1993, p. 50) emphasize the "eclectic" character of validation in this regard [...]

2.4.3.1. Basic Terms

The following list adopted from (Page and Kreutzer, 2005, p. 196) reviews relevant terms in simulation validation based on definitions by Brade (2003, Ch. 1.5):22

• Model validation serves to ensure that a simulation model is a suitable representation of the real system with respect to an intended purpose of the model's application (Brade, 2003, p. 16 cited with minor modifications in Page and Kreutzer, 2005, p. 198). Furthermore, the term validation is also [...] used as an umbrella term for all quality assurance activities (i.e. [...] model validation, verification, and testing) (Page and Kreutzer, 2005, p. 198).

• Model verification in the wide sense serves to ensure that a model is correctly represented and was correctly transformed from one representation into another (Brade, 2003, p. 14 cited with minor modifications in Page and Kreutzer, 2005, p. 198). Model verification in the narrow sense denotes the application of formal methods to prov[e ...] the correctness of model representations and their transformations (Page and Kreutzer, 2005, pp. 198).

22Actually, the definitions by Brade (2003) include the terms validation and verification. The distinction between verification in the wider and narrower sense and the notion of testing are added. A detailed discussion of different forms and 'degrees' of verification is provided by Fetzer (2001), who uses the term verification in the broad sense (Fetzer, 2001, p. 243). A similar definition for testing from the software engineering domain is found in Whittaker (2000, p. 77).

Figure 2.14.: A refined validation process based on Balci (1998, p. 337) and Sargent (2001, p. 109). Adopted from Page and Kreutzer (2005, p. 200).

• Model testing denotes the execution of a computerized simulation model in order to corroborate that it correctly implements its corresponding conceptual model. [...] testing is regarded as an important technique for model verification in the wide[...] sense. (Page and Kreutzer, 2005, p. 199)

2.4.3.2. Validation in the Model Building Cycle

In (Page and Kreutzer, 2005, pp. 199-200) we contrasted different variants of the process followed to conduct a simulation study:

Many authors, e.g. Page (1991) and Sargent (2001), differentiate between three main validation phases [in the model building cycle]:

1. Conceptual model validation is performed during the conceptual modelling phase. It aims to ensure that the model is a plausible representation of the real system; i.e. suitable to answer all questions raised by the problem definition.

2. Model verication (in the wide sense) is performed during the implementation phase and seeks to establish that the computerized model implements the conceptual model correctly.

3. Operational model validation is conducted before and during simulation experiments. It aims to determine how closely a model's behaviour resembles the real system's behaviour. [...] To achieve this, data collected during model execution is compared with corresponding data gathered during the real system's operation.

The more complex variant of this basic process shown in Figure 2.14 has been strongly influenced by Sargent (2001, p. 109), Balci (1998, p. 3), and the "V&V triangle" (standing for validation and verification) presented in Brade (2003, p. 62)" (Page and Kreutzer, 2005, p. 200). We will only clarify some basic principles by means of this figure. A more detailed description is provided in (Page and Kreutzer, 2005, pp. 200).

Firstly, as noted in (Page and Kreutzer, 2005, p. 200), the placement of the problem definition above the whole process indicates that a simulation model is built with respect to the study objectives and its credibility is judged with respect to those objectives (Balci, 1998, p. 346 cited in Page and Kreutzer, 2005, pp. 200-201). Validation [...] can never guarantee "absolute" model validity [... but] only improve models' credibility for answering certain questions [...] by means of certain simulation experiments. Zeigler et al. (2000, p. 369) refer to this endeavour as an "experimental frame".

(Page and Kreutzer, 2005, p. 201)

Secondly, as also cited in (Page and Kreutzer, 2005, p. 201), validation should be conducted throughout the whole model building process (Page, 1991, p. 148). Every phase [of the model building cycle] must be complemented by an associated validation activity (Page and Kreutzer, 2005, p. 201) ensuring the validity of the artifacts produced in that phase. Although the process shown in Figure 2.14 is reminiscent of [... a] classical waterfall model, it must be stressed that model building is a strongly iterative activity (Page and Kreutzer, 2005, p. 202).

2.4.3.3. Validation and the Philosophy of Science

To put the validation of simulation models into a broader context, many authors (e.g. Naylor and Finger, 1967; Birta and Özmizrak, 1996, p. 79) cite its relation to problems considered in the philosophy of science. In (Page and Kreutzer, 2005, p. 203) we summarized these relations as well:

As Cantú-Paz et al. (2004, p. 1) point out, "computer simulations are increasingly being seen as the third mode of science, complementing theory and experiments". If we regard simulation models as "miniature scientific theories" (Kleindorfer and Ganeshan, 1993, p. 50), it becomes obvious that there is a close correspondence between validation of simulation models and the more general problem of validating a scientific theory (see Troitzsch, 2004, p. 5 cited in Küppers and Lenhard, 2004, p. 2). The latter problem traditionally belongs to the domain of the philosophy of science and has been studied extensively.

[... According to Popper's critical rationalism], the main characteristic of the so-called "scientific method" [is the permanent] effort to falsify [...] preliminary theories. [...] falsification is superior to verification [...], since inductions from facts [...] to theories can never be justified on logical grounds alone [...] (Popper, 1982, p. 198). We can, however, use empirical observations to falsify a theory. A single wrong prediction suffices. [...]

[A more ...] practical viewpoint, proposed by Naylor and Finger (1967, pp. B-95), takes a "utilitarian" view of validation, with a mixture of rationalist, empiricist and pragmatist aspects [...]:

Figure 2.15.: Estimation of cost, value, and benefit in model validation (adopted with modifications from Shannon, 1975, p. 209). Figure and caption cited from Page and Kreutzer (2005, p. 207).

1. Rationalist step: assessment of intuitive plausibility of model structure. By following the rationalist approach, i.e. criticising a model based on well-founded a-priori knowledge, this step seeks to eliminate obviously erroneous assumptions.

2. Empiricist step: detailed empirical validation of those assumptions that have "survived" the first step.

3. Pragmatist step: validation of model behaviour by comparing model output to corresponding output obtained from the target system (if available). In this step the model's ability to predict the real system's behaviour is tested. [...]

Using the terminology introduced [... above], the steps 1 and 2 are concerned with conceptual model validation. Step 3 views the model as a "black box" and corresponds to [...] operational validation [...].

2.4.3.4. General Guidelines

Due to the large number and variety of available validation approaches, it can be useful to have a list of guidelines at hand when performing practical model validation. In (Page and Kreutzer, 2005, pp. 205), we cited the following guidelines derived from similar treatments by Page (1991, Ch. 5.2) and Balci (1998, Ch. 10.3):

• Degrees of Model Validity: [...] rationalists and empiricists are interested in models that explain the behaviour of systems in terms of their structure. In contrast to this, pragmatists simply view systems as black boxes and rate model quality solely on the basis of a model's predictive power.

In the simulation domain these two perspectives have led to the definition of different degrees of model validity, which Bossel (1989, p. 14) summarizes as [...] (cited from Martelli, 1999, pp. 88): structural validity [...,] behavioural validity [...,] empirical validity [..., and] application validity (Page and Kreutzer, 2005, p. 206).

• Scope and Effort of Model Validation: [...] the impossibility of empirical theory verification strongly suggests that the establishment of "absolute" model validity is also a logical impossibility. This belief is confirmed by many other results [...] including the limits of formalization explored by Goedel and Turing (see e.g. Gruska, 1997, Ch. 6). [...] Shannon (1975, pp. 208) [... therefore] stresses the need for an "economic" approach to validation activities. The pseudo-quantitative estimation in Figure 2.15 shows that value and cost of a model do not increase in a linear fashion with [...] validity. [...] In several cases simple but suitably accurate models are better than extremely detailed ones, whose complexity and data requirements quickly become intractable. This is another example of the principle of "Occam's Razor", which [...] claims that a simpler theory with fewer parameters should be preferred [...], based on its easier testability (Popper, 2004, p. 188). (Page and Kreutzer, 2005, pp. 206-207)

• Value of Human Insight: In critical domains such as model validation, people often call for increased formality, automation, and tool-support [...]. However, according to Page (1991, p. 147), "the application of mathematical and statistical methods in model validation is limited" and such methods typically impose strong restrictions on model representation and complexity [... Furthermore they] only cover a narrow aspect of model validity. Brade (2003, p. 90) concludes that "although automated computer-based validation techniques are more objective, more efficient, more likely to be repeatable, and even more reliable than human review, the human reviewer plays an extremely important role for the V&V of models and simulation results". [...] In recognition of this, proponents of formal and automated techniques [like those discussed in this thesis] should seek to develop tools whose primary focus is the support and augmentation of human modelling and validation activities. (Page and Kreutzer, 2005, p. 208)

2.4.3.5. Classification of Validation Techniques

As recognized in (Page and Kreutzer, 2005, p. 210):

The simulation literature offers more (e.g. Balci, 1998) or less (e.g. Garrido, 2001) exhaustive listings of model validation techniques [... that] originate in different fields [of ...] computer science. To bring some structure into this "chaos", many authors propose their own schemes for classifying validation techniques; [... including] Balci (1998, p. 27) [...,] Garrido (2001, p. 216) [...,] Page (1991, p. 16) [..., and] Brade (2003, p. 56) [...]

To integrate these different schemes into a coherent classification, we [...] arrange validation techniques along the following dimensions [based on proposals by the above authors23]:

Approach: [As in output analysis] we separate exploratory from confirmatory validation techniques. [...]

Phase in model building cycle: This dimension describes whether a validation technique is mainly used for conceptual model validation, model verification, or operational validation; or one of the phases attached to a more sophisticated validation process.

Degree of formality: Along this dimension we differentiate between qualitative informal, [statistical, and exhaustive ...] validation methods. [...]

System view: This dimension refers to the perspective which characterizes a validation technique. [...]

23For a detailed review of these sources see Page and Kreutzer (2005, p. 210). Since validation is closely related to analysis, the scheme shown in Figure 2.16 strongly resembles the classification of analysis techniques in Section 2.4.2.

[Figure 2.16 classifies validation techniques along the dimensions approach (confirmatory, exploratory), phase in the model building cycle (conceptual model validation, computer model verification, operational validation), degree of formality (qualitative/informal, statistical/formal, exhaustive/formal), and system view/perspective (static, dynamic, black box, white box).]

Figure 2.16.: A classification scheme for validation techniques in simulation (adopted with modifications from Page and Kreutzer, 2005, p. 211)

Note that in (Page and Kreutzer, 2005, p. 211), we originally stated the same 'degrees of formality' as in the classification of analysis techniques presented in Section 2.4.2. However, in the context of this thesis, a distinction between statistical and exhaustive techniques seems more appropriate to cover the range of validation techniques treated. Besides statistical techniques for log and output analysis, we can also apply exhaustive formal verification techniques to simplified versions of a simulation model. In either case, both symbolic and numeric analysis techniques might be used.
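To indicate what an exhaustive technique applied to a simplified model might look like, the following sketch performs a brute-force reachability check of a safety property on a toy state-transition abstraction of a bounded queue. The model, state encoding, and invariant are hypothetical illustrations; practical applications would rely on dedicated model checkers rather than such hand-written exploration.

```python
from collections import deque

# Strongly simplified model: a queue with bounded capacity and one server.
# State = (queue_length, server_busy); transitions: arrival, service start, departure.
CAPACITY = 3

def successors(state):
    queue, busy = state
    if queue < CAPACITY:
        yield (queue + 1, busy)          # arrival
    if queue > 0 and not busy:
        yield (queue - 1, True)          # service start
    if busy:
        yield (queue, False)             # departure

def check_invariant(initial, invariant):
    """Exhaustively explore all reachable states and check a safety property."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return False, state           # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

# Safety property: the queue never exceeds its capacity.
holds, counterexample = check_invariant((0, False), lambda s: s[0] <= CAPACITY)
print(holds)  # True for this simplified model
```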

This chapter provides an introduction to concepts, modeling techniques, and tools for multi-agent systems (MAS) and multi-agent-based simulation (MABS). The structure and content of the presentation are largely based on Klügl (2000, Chs. 2-4). Several sections were adopted from Page and Kreutzer (2005, Ch. 11), which was co-written by the author and is partly based on his diploma thesis (Knaak, 2002).