
1.4 Immunotherapy of cancer

1.4.3 Mathematical modelling in life sciences

The immune system, like many other biological systems, is a highly complicated network of many different cell and molecule types that interact in multiple ways to regulate the body’s response to disease. Modern experimental techniques make it possible to study many aspects of such complex systems. However, the options for manipulating biological processes in living creatures or even whole ecosystems are limited. Experiments under laboratory conditions, such as in vitro experiments in cell cultures, often oversimplify and do not reflect the actual situation.

In these cases it is often helpful to employ mathematical modelling to further study the dynamics of the system. In contrast to many experimental models, mathematical models and simulations are less time- and money-consuming and easily manipulable. Even though theoretical models may not always be able to make quantitative predictions and can of course not replace clinical studies, they can still be beneficial. Through modelling, one can test hypotheses, identify important mechanisms, i.e. those that determine the outcome of an experiment, and predict general trends for scenarios beyond the experiments. Thus, mathematical modelling can guide experimental research towards promising ideas, validate experimental results, and reduce the number of necessary long-term and animal experiments.

The general approach of mathematical modelling is to start from some input data and, in most cases, some hypotheses on which players (cell types, molecules, organisms, etc.) are involved and how they interact. From this information, a model is built. It is validated by comparing simulation results, generated based on this model, with experimental data. Optimally, this validation data should be different from the input data. If the simulation and experimental results coincide, this is a strong indication, although not a proof, that the underlying hypothesis is true.

Depending on the pre-existing knowledge and the amount and quality of available data, there are different approaches to building a suitable mathematical model. If the data is high in quantity and quality, it makes sense to apply statistical and computational methods to choose the best model out of a pool of hypotheses and to determine the parameters that fit the experimental data. In [50], Costa and coauthors present a Bayesian approach for model selection and parameter estimation in tumour growth models. Recently, Fröhlich et al. have introduced a computational framework for the parameterisation of large-scale mechanistic models. They apply these methods to a large network of cancer-related signalling pathways to predict responses to combined drug treatments [80]. Computational methods like this make it possible to integrate large sets of diverse data to investigate complex biological systems. Particularly in the parameterisation of large systems, it is however important to conduct an uncertainty analysis. This determines the reliability of model predictions, accounting for various sources of uncertainty in model input and design.

Particularly if the available data is not that extensive, statistical methods can give a false confidence in the attained model and the predictions generated by simulations. In those cases, it is often useful to employ a simpler modelling approach. Here, the goal is to keep the number of different cell or molecule types and of interactions or mechanisms incorporated in the model as low as possible. This produces low-dimensional models that allow for a more theoretical analysis of the dynamics. Such an approach aims more towards a general understanding of the structure of the system than an exact quantitative fit. It looks for common phenomena that arise across a wider set of parameter choices. In case the simulations do not fit the experimental data, one retains a basic understanding of causalities within the mathematical model and can suggest which mechanisms might be represented incorrectly.

As an example, Kuznetsov and coauthors have proposed a simple ODE model for the interaction between tumour and immune cells and performed parameter estimation [116]. A review of more such models can be found in [67]. In [1], Altrock, Liu, and Michor review a number of stochastic and deterministic models for different aspects of cancer, such as tumour initiation, progression, metastasis, and treatment resistance. Kimmel et al. consider a hybrid model that combines a deterministic ODE model with stochastic birth-and-death processes for small tumour cell populations to study CAR T-cell therapy of B-cell lymphomas [106].

On a more theoretical level, with no connection to specific data, Mayer and Bovier study the activation of T-cells as a statistical test problem, using large deviation techniques [130].

In [64, 76], Foo, Leder, and coauthors investigate tumour genesis, where several mutations have to be acquired to gain a fitness advantage, in a spatial setting similar to a voter model. Gunnarsson et al. consider multitype branching processes to study the stabilisation of reversible phenotypic switches that lead to drug resistance, for various treatment approaches [89].

With the experiments on ACT therapy of melanoma in mouse models from Landsberg et al. and Glodde et al. [117, 87, 88], we are in the situation of relatively sparse data. The available measurements give information about the variation over time of the total number of cells (comprising many different cell types) and about the genetic composition of the tumour at inoculation and harvesting. Therefore, we employ a rather simple model that only involves the most important cell or molecule types and mechanisms. Among other reasons, the study of spontaneously occurring mutations makes it necessary to consider a stochastic model.

Since we consider evolutionary dynamics within a growing tumour tissue that has not yet reached an equilibrium size, the model should not be restricted to a fixed population size and should depict the competitive interaction between different cell types. All of these reasons make an extension of the previously mentioned class of individual-based Markov processes a suitable choice of model. In this thesis we extend the model of Baar et al. [11] that is used to study the experiments in [117]. Parameter estimation for this model by an SAEM algorithm is proposed by Diabaté et al. in [58]. More details are given in Section 1.6 and Chapter 4.
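To fix ideas, the typical structure of such logistic individual-based models can be sketched as follows; the notation here is generic and illustrative, not necessarily the exact one of [11] or Chapter 4. In a population with n_i individuals of trait i and a carrying-capacity parameter K, each individual evolves according to exponential event rates of the form

```latex
% generic rates of a logistic birth-death process with competition
% (illustrative notation, not the exact model of [11])
\text{birth of a trait-}i\text{ individual at rate } b_i\, n_i, \qquad
\text{death at rate } \Big( d_i + \sum_j \frac{c_{ij}}{K}\, n_j \Big)\, n_i,
```

where c_{ij} encodes the competitive pressure of trait j on trait i; mutations can then be included by letting a birth produce, with some small probability, an offspring of a neighbouring trait.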

There are several different approaches to simulating sample paths of such generalised individual-based Markov processes, where a number of different events or reactions occur at exponential rates that depend on some parameters and on the current state of the population.

A stochastic simulation algorithm to produce an exact realisation of such a Markov process was introduced by Gillespie in the context of chemical reactions [83]. There are various ways to improve this algorithm, e.g. by reducing the number of required random variables. For example, the next reaction method [82] reuses random variables by rescaling, which is further improved by efficient binning of events in [160]. Both approaches are particularly suited for systems with many possible events whose rates only depend on a few cell or molecule types. However, all of these algorithms generate single events separately. Particularly in large populations, where there are many frequently occurring events like proliferation and death, they are computationally heavy and need many iterations to simulate the evolution over a time span of interest.
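As a minimal sketch of Gillespie's direct method, the following generic implementation simulates a single-type birth-death process with logistic competition (the parameter values and function names are chosen for illustration only):

```python
import random

def gillespie(rates, stoich, x0, t_max, seed=0):
    """Exact SSA (Gillespie's direct method).

    rates:  list of functions x -> non-negative event rate
    stoich: list of state-change tuples, one per event
    x0:     initial state (tuple of counts)
    Returns the trajectory as a list of (time, state) pairs.
    """
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    path = [(t, tuple(x))]
    while t < t_max:
        a = [r(x) for r in rates]       # propensities at the current state
        a0 = sum(a)
        if a0 == 0.0:                   # absorbing state: nothing can happen
            break
        t += rng.expovariate(a0)        # waiting time ~ Exp(total rate)
        u = rng.random() * a0           # pick an event proportional to its rate
        k, acc = 0, a[0]
        while acc < u:
            k += 1
            acc += a[k]
        x = [xi + d for xi, d in zip(x, stoich[k])]
        path.append((t, tuple(x)))
    return path

# birth rate b*n, death rate (d + c*n)*n: logistic competition
b, d, c = 2.0, 1.0, 0.001
rates = [lambda x: b * x[0], lambda x: (d + c * x[0]) * x[0]]
stoich = [(+1,), (-1,)]
traj = gillespie(rates, stoich, (50,), 10.0)
```

Every iteration of the main loop produces exactly one event, which is what makes the exact algorithm expensive for large populations: near the equilibrium size (b - d)/c = 1000, tens of thousands of events are needed per unit of time.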

To overcome this problem, there are different procedures to approximate the number of occurrences of a certain event within some time interval. The simplest one is to consider the corresponding deterministic system (according to the large population approximation described above). This way, the mean dynamics of the system can be simulated with classical numerical techniques for solving differential equations, such as Runge-Kutta methods. The shortcoming of this approach is that random effects are no longer included.

Therefore, the idea of hybrid algorithms is to combine deterministic and stochastic methods in some way to speed up simulation while maintaining stochastic fluctuations. To do so, either subpopulations are sorted by large and small population sizes, or events are sorted by frequent and rare occurrence, i.e. by the size of their event rate. The evolution of large populations or frequent events is handled deterministically, according to the mean dynamics, while small populations or rare events are treated stochastically. There are also approaches that introduce an intermediate step of stochastic diffusion approximations. Examples of approximative hybrid simulations of individual-based models can be found in [159, 126, 62].
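A toy version of this idea, in which the large resident population follows its mean dynamics while rare mutation events (at rate u*n) are drawn stochastically, might look as follows; the parameter values, the Euler step for the deterministic part, and the function name are illustrative assumptions, not the algorithm used in this thesis:

```python
import math
import random

def hybrid_step(n, mutants, h, b, d, c, u, rng):
    """One hybrid step: the resident population n evolves deterministically,
    rare mutation events (rate u * n) are sampled stochastically."""
    # deterministic Euler update for the frequent birth/death events
    n_new = n + h * (b - d - c * (n + mutants)) * n
    # rare event: for a short step, P(at least one mutation) ~ 1 - exp(-u*n*h)
    if rng.random() < 1.0 - math.exp(-u * n * h):
        mutants += 1
    return n_new, mutants

rng = random.Random(1)
b, d, c, u = 2.0, 1.0, 0.001, 1e-4
n, mutants, t, h = 50.0, 0, 0.0, 0.01
while t < 10.0:
    n, mutants = hybrid_step(n, mutants, h, b, d, c, u, rng)
    t += h
```

The deterministic part costs one function evaluation per step regardless of the population size, while the stochastic part keeps the randomness of exactly those events, here mutations, whose discreteness matters.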

Another approach is that of so-called tau-leaping, where the number of events within a certain time interval is generated as a Poisson random variable [84]. The approximation in this case is that the rates of the different events are assumed to be constant over this time interval. Therefore, the length of the interval is chosen dynamically based on the sensitivity of the rates, i.e. on how much the rates vary with a changing population state. For linear, quadratic, and cubic rates, there is a good theory in place to choose the interval length [30]. However, general rates have not been treated yet.
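A basic fixed-step version of tau-leaping for the same birth-death example can be sketched as follows (a simplification: here the step length tau is constant rather than chosen adaptively, and the Poisson sampler is a plain stdlib implementation):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's algorithm for a Poisson(lam) sample (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap(rates, stoich, x0, t_max, tau, seed=0):
    """Tau-leaping: within each interval of length tau, the rates are frozen
    and the number of firings of each event is Poisson(rate * tau)."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_max:
        a = [r(x) for r in rates]          # rates frozen for this interval
        for k, ak in enumerate(a):
            fires = poisson(rng, ak * tau)
            x = [xi + fires * d for xi, d in zip(x, stoich[k])]
        x = [max(xi, 0) for xi in x]       # crude guard against negative counts
        t += tau
    return x

b, d, c = 2.0, 1.0, 0.001
rates = [lambda x: b * x[0], lambda x: (d + c * x[0]) * x[0]]
stoich = [(+1,), (-1,)]
final = tau_leap(rates, stoich, (50,), 10.0, 0.01)
```

One Poisson draw per event type replaces many individual event draws, so the cost per interval is independent of the population size; the price is the frozen-rate approximation, which is why the choice of tau has to track how fast the rates change.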

In the simulations in this thesis, we apply a hybrid algorithm that combines deterministic Runge-Kutta methods and a stochastic Gillespie algorithm, differentiating between frequent and rare events. This is justified by applying the large population approximation result from [70]. More details on this are given in Chapter 4.