
3.4.4 Defining and Performing a Simulation Experiment

The previous sections have shown how a simulation model is defined using the modeling languages of the SALMA approach and how the model can be turned into a concrete scenario by initializing values and choosing probability distributions. In order to use this model within a simulation experiment, some additional decisions have to be made, for instance:

1. How often should the simulation be repeated, i.e. how many simulation runs are performed?

2. How long should each simulation run last? Which criteria for stopping a run exist?

3. How should parameters of the model be varied between the simulation runs?

def create_initial_situation(self):
    coordinator1, = self.world.get_entities_by_id("coordinator1")
    robots = self.world.getDomain("robot")
    items = self.world.getDomain("item")
    workstations = self.world.getDomain("workstation")
    for r in robots:
        r.xpos = np.random.randint(1, GRID_WIDTH)
        r.ypos = np.random.randint(1, GRID_HEIGHT)
        r.vx = 0
        r.vy = 0
        r.broken = False
        r.next_task = None
        r.robot_radius = 1
        for i in items:
            r.set_carrying(i, False)
    coordinator1.request_queue = []
    for item in items:
        item.xpos = np.random.randint(1, GRID_WIDTH)
        item.ypos = np.random.randint(1, GRID_HEIGHT)
        item.delivered_to = None
    for ws in workstations:
        ws.stationX = np.random.randint(1, GRID_WIDTH)
        ws.stationY = np.random.randint(1, GRID_HEIGHT)
        ws.delivered_item_count = 0

Figure 3.18: Creation of the initial situation for the delivery robots experiment.

4. Which information should be recorded in order to create an adequate database for later analysis?

In general, making these choices is part of a process that is often referred to as simulation experiment design, which is a broad field in its own right and can therefore only be touched upon superficially in this thesis. Frequently cited introductions to this topic can be found in [San05] and [Law14, Chap. 12].

The first aspect to realize with respect to the issues mentioned above is that these decisions have to be made not only for one individual simulation run but for a series of simulation runs. This means that the framework needs a way to control the entire life cycle of a simulation run, including initialization, reset, cleanup, and data logging. The part of the SALMA simulation framework that is responsible for this is shown in Figure 3.19. A simulation experiment is

defined by creating a subclass of Experiment. There, a protocol for the initialization of a simulation run is established by means of several template methods that can be implemented within the user-defined subclass (see [GHJV94]). Primarily, this includes the methods create_entities, setup_distributions, and create_initial_situation, for which examples were shown throughout this section. The structure of the initialization procedure itself can be seen in Figure 3.22a.
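To make this protocol concrete, the following skeleton sketches what such a user-defined subclass could look like. It is an illustrative sketch only: the class name and the method bodies are placeholders, and merely the names of the overridden template methods are taken from Figure 3.19.

# Illustrative skeleton: class name and method bodies are placeholders,
# the overridden template methods follow Figure 3.19.
class MyDeliveryExperiment(Experiment):

    def create_entities(self):
        # create the robot, item, workstation, and coordinator entities
        ...

    def setup_distributions(self):
        # choose probability distributions, e.g. for stochastic events
        ...

    def create_initial_situation(self):
        # assign initial fluent values, cf. Figure 3.18
        ...

The hook methods before_run() and after_run() can be overridden in the same way if a run requires additional preparation or cleanup.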

[Class diagram omitted. It shows the interface ExperimentRunner with run_trials(e: Experiment, num: int, max_steps: int): ExperimentResult[*] and its implementation SingleProcessExperimentRunner; the class Experiment with run(max_steps: int): ExperimentResult, initialize(), reset(), and the template methods setup_distributions(), create_entities(), create_initial_situation(), before_run(), and after_run(); the user-defined subclass DeliveryRobotsExperiment; the singleton class World; StepListener with step(info: StepInfo): Verdict; StepInfo and ActionExecution; the enumeration Verdict with the values UNDETERMINED, NOT_OK, CANCEL, and OK; and ExperimentResult with the attributes verdict, steps, and worldTime.]

Figure 3.19: Structure of the SALMA simulation experiment framework.

After the initialization sequence, a simulation experiment can be executed using the method Experiment.run(). The simulation then proceeds until either the world has finished, i.e. there is no process left that might be executed, or a time limit is reached. Additionally, an experiment can be equipped with one or more StepListeners, which, although represented as interfaces in Figure 3.19, are really callback functions that are executed after each simulation step. Every step listener function receives arguments that include a reference to the World instance and a collection of details about the current step. In particular, it receives two lists that contain the actions (including events) that were performed in this step and those that failed because their preconditions were not satisfied. One typical use case for a step listener is to write part of the state and action information to a log file or a database that can later be used for the analysis of the experiment results. An example of such a logging handler that is used in the delivery robots example can be found in Figure 3.20. There, a step listener is created as a closure that is bound to a file object [Pyt15a, 16.2] that references a CSV file to which the positions of all robots are written (together with other data that is omitted here).

def create_step_logger(f: TextIOBase):
    def __l(world: World, step=None, **kwargs):
        positions = []
        ...
        robots = sorted(world.getDomain("robot"))
        for rob in robots:
            positions.append((rob.xpos, rob.ypos))
        ...
        columns = [step, world.time]
        for p in positions:
            columns.extend(p)
        ...
        f.write(";".join(list(map(str, columns))) + "\n")
        f.flush()
    return __l

...
experiment = DeliveryRobotsExample(experiment_path)
experiment.initialize()
with experiment_path.joinpath("experiment.csv").open("w") as f:
    f.write(create_csv_header() + "\n")
    f.flush()
    experiment.step_listeners.append(create_step_logger(f))
    experiment.run(max_steps=3000)

Figure 3.20: Use of a step listener for data logging in the delivery robots experiment.
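Further listeners for other quantities can be attached in the same closure style. As a small, purely illustrative sketch (the helper create_delivery_logger is hypothetical and not part of the SALMA code base; it only uses getDomain, the delivered_to fluent, and world.time, as in the examples above), the following listener records how many items have been delivered in each step:

# Illustrative sketch in the style of Figure 3.20: logs the number of items that
# have been delivered so far; create_delivery_logger is a hypothetical helper.
def create_delivery_logger(f: TextIOBase):
    def __l(world: World, step=None, **kwargs):
        delivered = sum(1 for i in world.getDomain("item")
                        if i.delivered_to is not None)
        f.write("%s;%s;%s\n" % (step, world.time, delivered))
        f.flush()
    return __l

Such a listener would be registered exactly like the position logger, i.e. by appending it to experiment.step_listeners.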

Besides data logging, step listeners can also be used to define stop conditions for simulation runs. A simulation run is stopped as soon as a step listener returns one of the verdicts OK or NOT_OK, declaring the run either a success or a failure. This can be an important measure to prevent simulations from getting stuck in a state in which no further valuable progress is possible.

For instance, in the delivery robots example, it might happen that all robots are broken and thus unable to move, or that all items have been delivered. In both cases, it obviously does not make sense to continue the simulation. Therefore, the two additional step listeners shown in Figure 3.21 are installed. The first, break_when_all_delivered, returns a positive verdict when the fluent delivered_to is set for all items. On the other hand, break_when_all_broken returns a negative verdict when the fluent broken is true for all robots. Both return None when their conditions are not met, indicating that the simulation should continue.

When a simulation experiment involves any kind of statistical analysis, a single simulation run is not enough to draw meaningful conclusions. Instead, a batch of simulation runs has to be performed to gather a sufficient amount of data. For this purpose, the SALMA framework defines the interface ExperimentRunner.

def break_when_all_delivered(world: World, **kwargs):
    for i in world.getDomain("item"):
        if i.delivered_to is None:
            return None
    return OK

def break_when_all_broken(world: World, **kwargs):
    for r in world.getDomain("robot"):
        if r.broken is False:
            return None
    return NOT_OK

Figure 3.21: Use of step listeners to establish simulation stop conditions in the delivery robots experiment.

Its method run_trials() is called with the number of simulation runs (trials) that should be performed. The execution then enters a nested loop that is sketched in Figure 3.22b. It can be seen that the initialization procedure from Figure 3.22a is executed at the beginning of every simulation run. As explained above, this resets the world state by recreating all entities, restoring the event and action configurations, and constructing an initial situation for the next run. Then, the Experiment is executed via its run() method, which triggers the hook function before_run() that can, for example, be used to initialize auxiliary data structures or resources like log files. Then, the inner loop is entered, which keeps executing the main step function of the simulation algorithm, World.step(), until a) the simulation indicates a finish of the world's processes (represented by the flag stepInfo.world_finished being true); b) the maximum number of steps has been reached; c) the simulation run was stopped because a verdict has been found by a step listener; or d) an action has failed, i.e. it has been performed by an agent although its precondition was not satisfied. Finally, when the inner loop is left, the method after_run() is called to perform any necessary post-processing, e.g. saving files to disk, and the results of the simulation run are appended to the overall result collection, which is eventually returned when run_trials() exits.

Although the abstracted interaction in Figure 3.22 refers to the general interface ExperimentRunner, it is actually a representation of the execution schema realized in the class SingleProcessExperimentRunner. At the moment, this class, which performs all simulation runs sequentially within one Python process, is the only implementation included in the SALMA framework.
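Putting the pieces together, a batch experiment could be started as sketched below. This is a hedged usage sketch: it relies only on the run_trials() signature and the ExperimentResult attributes shown in Figure 3.19, while the concrete numbers of trials and steps are arbitrary examples.

# Hedged usage sketch: perform 50 sequential trials and count the successful runs.
# Signature and result attributes as in Figure 3.19; the numbers are arbitrary.
runner = SingleProcessExperimentRunner()
results = runner.run_trials(experiment, num=50, max_steps=3000)
successes = sum(1 for r in results if r.verdict == OK)
print("successful runs: %d of %d" % (successes, len(results)))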

[Two UML sequence diagrams omitted; the interactions they depict are described in the surrounding text.]

Figure 3.22: Interactions during simulation initialization and execution. (a) Simulation run initialization sequence. (b) Execution of a batch of simulation runs.

However, with the recent developments in the fields of parallel and distributed computing, cloud computing, and various emerging “Big Data” technologies,

it has become a relatively straightforward task to realize experiment runners that are able to simultaneously perform many simulation runs in a cluster and aggregate the results. Since this kind of horizontal scalability is a key factor for the success of statistical model checking and simulation approaches in general, this topic will be revisited in the outlook that is given in Section 7.3 in the conclusion of this thesis.
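As a purely illustrative sketch of this direction (not part of the SALMA framework), trials could, for instance, be distributed over several worker processes with Python's standard library; a real implementation would additionally have to take care of result aggregation, random seeding, and inter-process data transfer.

# Illustrative only: a process-parallel variant of run_trials(). Each worker
# process builds its own experiment instance, since the World is a per-process
# singleton (cf. Figure 3.19). Class and constructor usage are placeholders.
from concurrent.futures import ProcessPoolExecutor

def run_one_trial(max_steps: int):
    experiment = DeliveryRobotsExperiment()   # hypothetical no-argument constructor
    experiment.initialize()
    return experiment.run(max_steps=max_steps)

def run_trials_parallel(num: int, max_steps: int, workers: int = 4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_one_trial, [max_steps] * num))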