Elastic Properties of Colloidal Crystals

4.1 Monte-Carlo Simulations

The Monte-Carlo simulation method is based on the mathematical theory of Markov chains, which are used to generate states with probabilities given by statistical mechanics. A Markov chain is a sequence of random variables $X_0, X_1, X_2, \ldots$ with the following properties:

(a) The random variables $X_0, X_1, X_2, \ldots$ assume values in a certain finite or denumerably infinite set, which is called the state space.

(b) Each random variable $X_n$ depends on all previous random variables only through its immediate predecessor $X_{n-1}$; that means all memory about the history of the system before it was in state $X_{n-1}$ is ignored.

A Markov chain is fully determined by the initial probability $\pi^{(0)}_j = P(X_0 = j)$ and by the transition probabilities

$$P_{jk} = P(X_{n+1} = k \mid X_n = j) \qquad (4.1)$$

linking the two random variables $X_n$ and $X_{n+1}$. This conditional probability (cf. section 9.1.1) is assumed to be stationary, i.e. independent of $n$. Due to the countability of the state space, $P_{jk}$ can be written as a transition matrix. Whenever $P_{jk}$ fulfills the positivity condition and the normalization, where its rows add up to one,

$$P_{jk} \ge 0 \quad\text{and}\quad \sum_k P_{jk} = 1, \qquad (4.2)$$

it is called a stochastic matrix. Such a matrix is the transition matrix of an irreducible Markov chain where the full state space is the only closed set. In other words, every state can be reached from every other state.
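As a small numerical illustration (a Python sketch with a made-up 3-state chain, not taken from the text), the conditions (4.2) are easy to check:

```python
# Check the stochastic-matrix conditions (4.2) for a small example chain.
# The 3x3 matrix below is a made-up illustration.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def is_stochastic(P, tol=1e-12):
    """Return True if P_jk >= 0 and every row sums to one."""
    return all(
        all(p >= 0.0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

print(is_stochastic(P))  # True
```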

The following definitions will be employed:

• A state $j$ is called ergodic if the probability to return to state $j$ is equal to one (which is the definition of a persistent state) and the mean waiting time satisfies $\langle T_j \rangle < \infty$.

• The limiting distribution¹ is defined as

$$\tilde\pi_k \equiv \lim_{n\to\infty} \sum_j \left(P^n\right)_{jk}\, \pi^{(0)}_j. \qquad (4.3)$$

Theorem: Consider an aperiodic irreducible Markov chain with a stochastic matrix $P_{jk}$. All states are ergodic if, and only if, the linear system

$$\pi_k = \sum_j \pi_j P_{jk}, \qquad \sum_k \pi_k = 1, \qquad (4.4)$$

possesses an invariant probability distribution $\pi_k$ as solution.

From definition (4.3) it is clear that $\tilde\pi_k$ must satisfy the eigenvalue equation $\tilde\pi_k = \sum_j \tilde\pi_j P_{jk}$ with eigenvalue 1, and thus it is a solution of (4.4).
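Numerically, the invariant distribution solving (4.4) can be found by iterating the chain, since for an aperiodic irreducible chain repeated application of the transition matrix converges to $\tilde\pi_k$. A minimal Python sketch, using a made-up 3-state matrix:

```python
# Power iteration: pi^(n+1)_k = sum_j pi^(n)_j P_jk converges to the
# invariant distribution of an aperiodic irreducible chain (made-up matrix).
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def stationary(P, n_iter=1000):
    pi = [1.0 / len(P)] * len(P)          # arbitrary initial distribution
    for _ in range(n_iter):
        pi = [sum(pi[j] * P[j][k] for j in range(len(P)))
              for k in range(len(P))]
    return pi

pi = stationary(P)
print(pi)                                  # the invariant distribution
print(abs(sum(pi) - 1.0) < 1e-9)           # normalization of (4.4): True
```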

Theorem: Consider an aperiodic irreducible Markov chain $X_0, X_1, X_2, \ldots$ of ergodic states with invariant distribution $\tilde\pi_k$. For every bounded function $g$ on the integers and for almost all (i.e. with probability one) realizations of the Markov chain, one has

$$\sum_k g(k)\,\tilde\pi_k = \lim_{n\to\infty} \frac{1}{n} \sum_{l=1}^{n} g(X_l). \qquad (4.5)$$

This theorem provides the mathematical foundation of the Monte-Carlo method, by which we want to generate states corresponding to a given limiting probability distribution by means of Markov chains. Equation (4.5) tells us that the arithmetic average of a sequence of simulation results generated by an underlying Markov chain reproduces the ensemble average performed with the given distribution $\tilde\pi_k$.
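A short numerical illustration of (4.5), assuming a made-up two-state chain whose invariant distribution $(2/3, 1/3)$ is known analytically:

```python
import random

# Simulate a Markov chain and compare the time average (r.h.s. of (4.5))
# with the ensemble average over the invariant distribution (l.h.s.).
# The 2-state chain below is a made-up illustration.
random.seed(1)
P = [[0.9, 0.1],
     [0.2, 0.8]]           # invariant distribution: (2/3, 1/3)
g = lambda k: float(k)     # any bounded function on the states

state, total, n = 0, 0.0, 200_000
for _ in range(n):
    state = 0 if random.random() < P[state][0] else 1
    total += g(state)

time_avg = total / n
ensemble_avg = g(0) * 2/3 + g(1) * 1/3
print(time_avg, ensemble_avg)   # the two averages agree within ~1/sqrt(n)
```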

In equilibrium statistical mechanics one is interested in ensemble averages (typically in the canonical ensemble) for certain observables $A$, which means the calculation of

$$\langle A \rangle_{NVT} = \int \mathrm{d}r^N\, A(r^N)\, \rho_{NVT}(r^N) = \frac{1}{Q_{NVT}} \int \mathrm{d}r^N\, A(r^N)\, e^{-\beta V(r^N)} \qquad (4.6)$$

where $\rho_{NVT}(r^N)$ is the canonical distribution, which has the partition function $Q_{NVT} = \int \mathrm{d}r^N\, e^{-\beta V(r^N)}$ as normalization constant. The integral in (4.6) can be approximated as a sum over possible points in position space

$$\langle A \rangle_{NVT} \approx \sum_k A(r_k)\, \rho_{NVT}(r_k) \equiv \langle A \rangle_{\mathrm{run}}. \qquad (4.7)$$

Thus, a Markov chain having the canonical distribution as limiting distribution reproduces, according to theorem (4.5), the canonical ensemble average. This leads to the question: Is there a procedure for constructing a Markov chain with a prescribed limiting distribution?

¹The Perron-Frobenius theorem [Fel68] assures that the limiting distribution $\tilde\pi_k$ implied by the Markov chain is independent of the initial condition $\pi^{(0)}_j$.

For the purpose of illustration, assume a system with a finite number of states $\{1, 2, \ldots, M\}$. In the underlying statistical ensemble, each state $j$ has the given probability $\pi_j$ ($= \rho_{NVT}(r_j)$ for the canonical ensemble). We can calculate the averages by

$$\langle A \rangle = \sum_{j=1}^{M} A(j)\, \pi_j = \lim_{n\to\infty} \frac{1}{n} \sum_{l=1}^{n} \frac{A(X_l)\, \pi_{X_l}}{\tilde\pi_{X_l}}, \qquad (4.8)$$

where $\tilde\pi_j$ is the limiting probability distribution of state $j$ of the underlying Markov chain. The last expression of (4.8) is obtained by applying theorem (4.5) to $g(j) = A(j)\pi_j/\tilde\pi_j$.
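A minimal numerical illustration of the reweighting in (4.8), with made-up state energies for an $M = 5$ state system: states are drawn from a limiting distribution $\tilde\pi_j$ (here the uniform one) and reweighted by $\pi_j/\tilde\pi_j$:

```python
import math
import random

# Illustration of Eq. (4.8) for a made-up M = 5 state system: sample states
# from a limiting distribution pi_tilde, then recover the ensemble average
# over pi by reweighting each sample with pi_j / pi_tilde_j.
random.seed(2)
M = 5
E = [0.0, 0.5, 1.0, 2.0, 4.0]               # made-up state energies
w = [math.exp(-e) for e in E]
pi = [x / sum(w) for x in w]                 # target distribution pi_j
pi_tilde = [1.0 / M] * M                     # uniform limiting distribution
A = lambda j: E[j]                           # observable A(j) = E_j

exact = sum(A(j) * pi[j] for j in range(M))

n = 100_000
samples = random.choices(range(M), weights=pi_tilde, k=n)
estimate = sum(A(j) * pi[j] / pi_tilde[j] for j in samples) / n

print(estimate, exact)   # agree within the statistical error
```

Choosing $\tilde\pi_j = \pi_j$ instead would make every weight equal to one, which is exactly the importance-sampling idea discussed next.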

How shall we choose the limiting distribution $\tilde\pi_j$? For the choice $\tilde\pi_j = \pi_j$ the simulation effort is focused on the states contributing most to the average value of interest. This choice expresses the idea of so-called importance sampling and underlies the success of Monte-Carlo simulations.² Having decided (according to Eq. (4.4)) which limiting distribution of the Markov chain we want, we face the problem that the transition matrix $P_{jk}$ is unknown. Considerable freedom is given in finding an appropriate transition matrix $P_{jk}$, with the crucial constraint that its elements should be independent of the normalization, i.e. the partition function $Q_{NVT}$ of the limiting distribution. A useful trick in searching for a solution of (4.4) is to introduce the unnecessarily strong condition of detailed balance or microscopic reversibility³

$$\pi_j P_{jk} = \pi_k P_{kj} \qquad (4.9)$$

for all states $j, k$.
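The Metropolis acceptance probability $\min(1, \pi_k/\pi_j)$ satisfies detailed balance (4.9) for any symmetric proposal matrix, which can be verified numerically; the following Python sketch uses a made-up 4-state distribution:

```python
import math
import random

# Numerical check of detailed balance (4.9) for the Metropolis acceptance
# rule acc(j -> k) = min(1, pi_k / pi_j) combined with a symmetric proposal
# matrix (made-up 4-state example distribution).
random.seed(3)
M = 4
w = [random.random() + 0.1 for _ in range(M)]
pi = [x / sum(w) for x in w]
p_move = 1.0 / (M - 1)      # propose any other state with equal probability

def P(j, k):
    """Metropolis transition probability for j != k."""
    return p_move * min(1.0, pi[k] / pi[j])

ok = all(
    math.isclose(pi[j] * P(j, k), pi[k] * P(k, j))
    for j in range(M) for k in range(M) if j != k
)
print(ok)  # True: pi_j P_jk = pi_k P_kj for all pairs
```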

Metropolis et al. [Met53] introduced a widely used procedure for constructing a phase-space trajectory in the canonical ensemble where the transition matrix satisfies the relations (4.4) and (4.9).⁴ The so-called Metropolis Monte-Carlo move is incorporated in the definition of the non-symmetric stochastic matrix

$$P_{jk} = P^{\mathrm{move}}_{jk} \min\!\left(1, \frac{\pi_k}{\pi_j}\right) \quad (j \ne k), \qquad P_{jj} = 1 - \sum_{k \ne j} P_{jk}. \qquad (4.10)$$

²This choice does not give the optimum variance reduction of $A(j)\pi_j/\tilde\pi_j$. That would be achieved by choosing

$$\tilde\pi_j = \frac{A(j)\, \pi_j}{\langle A \rangle},$$

which implies that we already know the solution. Another naive choice would be $\tilde\pi_j = 1/M$, which expresses the idea of so-called simple sampling because it treats all states equally in performing averages. Due to the exponential in the canonical distribution, this would imply a lot of unnecessary computational effort, because most states carry an exponentially suppressed weight and make only negligible contributions to the average.

³The condition (4.4) follows immediately from

$$\sum_j \pi_j P_{jk} \overset{(4.9)}{=} \sum_j \pi_k P_{kj} \overset{(4.2)}{=} \pi_k.$$

⁴There exist also other solutions to equations (4.4) and (4.9), for example the symmetrical solution

$$P_{jk} = P^{\mathrm{move}}_{jk} \frac{\pi_k}{\pi_j + \pi_k} \quad (j \ne k),$$

which is referred to as Barker sampling [All87].

In this solution $P^{\mathrm{move}}_{jk}$ is a symmetric matrix which is often called the underlying matrix of the Markov chain. The symmetry of $P^{\mathrm{move}}_{jk}$ is necessary to satisfy detailed balance.

The transition matrix (4.10) tells us that for mutual transitions between the states $j$ and $k$, one always accepts the transition from the less probable to the more probable state. The transition from a more probable to a less probable state is not fully rejected, but is accepted with a probability given by the ratio of the invariant probabilities. If a trial Monte-Carlo move from $j$ to $k$ is rejected, the state $j$ is kept, and it is counted a further time in evaluating arithmetic averages. A full Monte-Carlo step is performed when a move of every particle has been attempted once. Finally, notice that the Metropolis method involves only ratios of the invariant probabilities $\pi_k$ and is therefore independent of $Q_{NVT}$. It can be shown that the accuracy of the calculated averages scales with the simulation time $\tau_{\mathrm{run}}$ as

$$\langle A \rangle_{NVT} = \langle A \rangle_{\mathrm{run}} + O\!\left(\tau_{\mathrm{run}}^{-1/2}\right). \qquad (4.11)$$
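A minimal Python sketch of the Metropolis scheme, for a toy system not taken from this chapter: a single particle in the harmonic potential $V(x) = x^2/2$ at $\beta = 1$, where the exact canonical average is $\langle x^2 \rangle = 1$:

```python
import math
import random

# Minimal Metropolis sketch: one particle in V(x) = x^2 / 2 at beta = 1
# (a toy stand-in, not the colloidal system of this chapter).
# The exact canonical average is <x^2> = 1 / beta = 1.
random.seed(4)
beta, step, n_steps = 1.0, 1.5, 200_000
x, accepted, total = 0.0, 0, 0.0

for _ in range(n_steps):
    x_new = x + step * (random.random() - 0.5)     # symmetric trial move
    dV = 0.5 * (x_new**2 - x**2)
    # Metropolis criterion: accept with probability min(1, e^{-beta dV})
    if dV <= 0.0 or random.random() < math.exp(-beta * dV):
        x = x_new
        accepted += 1
    total += x * x        # a rejected move counts the old state again

print(total / n_steps)    # close to the exact value 1.0
```

With a few hundred thousand moves the running average reproduces the exact value within the $O(\tau_{\mathrm{run}}^{-1/2})$ error of (4.11).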

By construction, the Monte-Carlo algorithm does not reproduce the real time evolution of the system as molecular dynamics (MD) simulations do. But Monte-Carlo simulations produce time-ordered configurations, enabling us to speak of Monte-Carlo time. One of the advantages of MC simulations is that they can be readily adapted to the calculation of averages in any ensemble. For example, the so-called isothermal-isobaric (NPT) ensemble is more appropriate than the canonical ensemble for identifying the variety of equilibrium configurations of a system consisting of two particle species, because the occurrence of phase separation is avoided.

All the results of this chapter depend on the irreducibility of the underlying Markov chain. Simulation bottlenecks occur when a path between two allowed system states is difficult to find. Ways to circumvent these bottlenecks, which are always a worry in MC simulations, are special MC moves (such as particle exchange moves or cluster moves) or improved simulation methods (for example parallel tempering or transition path sampling).

It is particularly simple (compared to MD simulations) and efficient to simulate systems of hard particles (discs or spheres), which are not allowed to overlap. Here all MC moves that would result in overlapping particles are rejected and all other moves are accepted. For this reason, plenty of simulations have been performed with such model systems of hard particles.
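The acceptance rule for hard particles can be sketched as follows (Python, with made-up box size, disc diameter, and step width; periodic boundaries are omitted for brevity):

```python
import random

# Minimal hard-disc Monte-Carlo sketch: trial moves creating an overlap
# are rejected, all other moves are accepted (made-up parameters,
# no periodic boundaries for simplicity).
random.seed(5)
L, sigma, step = 10.0, 1.0, 0.3          # box size, disc diameter, max move

# Start from a non-overlapping square arrangement of 16 discs.
coords = [(1.0 + 2.0 * i, 1.0 + 2.0 * j) for i in range(4) for j in range(4)]

def overlaps(p, others):
    """True if disc at p overlaps any disc in others."""
    return any((p[0] - q[0])**2 + (p[1] - q[1])**2 < sigma**2 for q in others)

for sweep in range(100):                  # 100 full Monte-Carlo steps
    for i in range(len(coords)):
        x, y = coords[i]
        x_new = min(max(x + step * (random.random() - 0.5), 0.0), L)
        y_new = min(max(y + step * (random.random() - 0.5), 0.0), L)
        others = coords[:i] + coords[i + 1:]
        if not overlaps((x_new, y_new), others):   # accept overlap-free moves
            coords[i] = (x_new, y_new)

# The no-overlap condition holds throughout the simulation.
print(all(not overlaps(p, coords[:i] + coords[i + 1:])
          for i, p in enumerate(coords)))   # True
```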

By construction, the accuracy of the Monte-Carlo method strongly depends on the quality of the random number generator (RNG) used, because it is responsible for ensuring that a representative portion of phase space is sampled in a reasonable number of moves. Therefore, we performed simulations with various random number generators. Our first simulations used a self-written modulo RNG based on the congruential method. A sequence of random numbers is calculated from the 32-bit algorithm [Lan00]:

$$X_n = (16807 \cdot X_{n-1}) \bmod (2^{31} - 1), \qquad (4.12)$$

where $X_i \in \mathbb{N}$ and $X_0$ is an arbitrary starting seed. The results presented in this thesis are obtained using the Mersenne-Twister RNG of the GNU Scientific Library (GSL) [Gal06]. The program code supports all available RNGs of the GSL and the RNG of the Blitz++ library [Vel05].⁵ All simulation results are cross-checked with at least two different RNGs.
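The generator (4.12), the "minimal standard" generator of Park and Miller, takes only a few lines; a Python sketch (the thesis code itself is written against the GSL and Blitz++ libraries):

```python
# The 32-bit multiplicative congruential generator of Eq. (4.12)
# ("minimal standard" generator of Park and Miller).
M = 2**31 - 1          # the Mersenne prime 2147483647
A = 16807

def lcg(x):
    """One step X_n = (16807 * X_{n-1}) mod (2^31 - 1)."""
    return (A * x) % M

x = 1                  # arbitrary starting seed X_0
seq = []
for _ in range(3):
    x = lcg(x)
    seq.append(x)
print(seq)   # [16807, 282475249, 1622650073]
```

Starting from the seed $X_0 = 1$, the first values are 16807, 282475249, 1622650073; after 10,000 steps one reaches 1043618065, a standard consistency check for implementations of this generator.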

⁵Selection of the RNG used in the simulation is done by defining appropriate compiler variables during the compilation process. Available options are:

USE_MY_RANDOM for the congruential RNG based on equation (4.12)