
Monte Carlo Methods


2.2.1 Statistical Physics

In classical physics, problems with a limited number of particles can be described exactly by Newton's equations. If one knows all physical variables that determine the system at a given time $t_0$, the state of the system can be predicted exactly for all later times $t$.

For complicated many-particle systems this is no longer possible; therefore statistical variables are used. The analysis of their mean values then makes it possible to draw conclusions about the macroscopic behaviour of the system. Because of the great number of configurations in optimisation problems, physical optimisation also uses methods of statistical physics. In this context the observed systems can generally be described as canonical ensembles. These are closed systems in thermal contact with a heat bath, with which energy, but no particles, can be exchanged. The system is in thermal equilibrium when the temperature $T$ of the system is equal to the temperature of the heat bath. In that case the probability distribution of any state $\sigma$ is given by the Boltzmann distribution [No02]:

$$P_{eq}(\sigma) = \frac{1}{Z} \exp(-\beta H(\sigma)) \qquad (2.11)$$

Here $k_B$ is the Boltzmann constant and $Z$ the partition function, which plays a major role in statistical physics and serves as a normalisation factor in the calculation of many variables. The partition function is given by:

$$Z = \sum_{\sigma \in \Gamma} \exp(-\beta H(\sigma)) \qquad (2.12)$$

with $H$ the Hamiltonian and $\beta = \frac{1}{k_B T}$. The mean value or thermal expectation value of an observable $A$ in a discrete system can be calculated as:

$$\langle A \rangle = \frac{1}{Z} \sum_{\sigma \in \Gamma} A(\sigma) \exp(-\beta H(\sigma)) \qquad (2.13)$$

For $A = H$ one gets the expectation value of the Hamiltonian, which can also be expressed through the logarithmic derivative of the partition function:

$$\langle H \rangle = -\frac{\partial \ln Z}{\partial \beta} \qquad (2.14)$$

From that the heat capacity $C(T)$ can be derived:

$$C(T) = \frac{d \langle H \rangle}{dT} = \frac{\langle H^2 \rangle - \langle H \rangle^2}{k_B T^2} = \frac{\mathrm{Var}(H)}{k_B T^2} \qquad (2.15)$$

Because of this relationship between the heat capacity and the variance $\mathrm{Var}(H)$, this variable is significant for simulation: observing $C(T)$ shows at which temperature the greatest changes occur. The system must be in thermal equilibrium at every temperature; otherwise the Boltzmann distribution cannot be used.

The equilibrium takes time to establish; this has to be taken into account in a simulation.

In statistical physics, systems in thermal equilibrium are analysed numerically with Monte Carlo methods. These are algorithms that use random numbers to calculate mean values in statistical systems. But how can theoretically derived observables be calculated in practice? An exact calculation would have to consider all possible states of the system, which is infeasible in practice. Therefore the thermal expectation values are determined from a limited number of configurations. In order to get close to the true values, two methods have been developed: simple sampling and importance sampling.
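To make Equations 2.12 to 2.15 concrete, the following minimal Python sketch computes $Z$, $\langle H \rangle$ and $C(T)$ by exact enumeration of a small Ising chain with periodic boundaries; the chain length, coupling and temperature are arbitrary illustrative choices:

import itertools
import math

def exact_observables(N=8, J=1.0, kB=1.0, T=2.0):
    # Sum over all 2^N states: Z (Eq. 2.12), <H> (Eq. 2.13), C(T) (Eq. 2.15).
    beta = 1.0 / (kB * T)
    Z = H_sum = H2_sum = 0.0
    for spins in itertools.product((-1, +1), repeat=N):
        # Hamiltonian of a nearest-neighbour Ising chain
        H = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        w = math.exp(-beta * H)   # Boltzmann weight, Eq. 2.11
        Z += w
        H_sum += H * w
        H2_sum += H * H * w
    mean_H = H_sum / Z
    C = (H2_sum / Z - mean_H ** 2) / (kB * T ** 2)
    return Z, mean_H, C

print(exact_observables())

Since the sum runs over all $2^N$ states, this is feasible only for very small $N$, which is precisely the motivation for the sampling methods described next.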

2.2.2 Simple Sampling

The basic idea of simple sampling [BH02] is to replace the exact expressions for the thermal expectation values by a sum that does not run over all possible configurations $\sigma_1, \ldots, \sigma_G$. Instead, a statistical selection of characteristic points $\sigma_1, \ldots, \sigma_M$ with $M \leq G$ is taken from the phase space. The expectation value of an observable is then:

$$\bar{A}(\sigma) = \frac{\sum_{i=1}^{M} A(\sigma_i) \exp(-\beta H(\sigma_i))}{\sum_{i=1}^{M} \exp(-\beta H(\sigma_i))} \qquad (2.16)$$

The points $\sigma_i$ are selected at random from the whole phase space. In the limiting case the equation

$$\lim_{M \to G} \bar{A}(\sigma) = \langle A(\sigma) \rangle \qquad (2.17)$$

holds. The method is called simple sampling because every configuration is generated with uniformly distributed random numbers. In practice this method only gives good results for small systems or at very high temperatures, because each configuration is selected with the same probability.
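A minimal sketch of the estimator in Equation 2.16, reusing the toy Ising chain from the enumeration example above; the sample count $M$ and the temperature are arbitrary illustrative parameters:

import math
import random

def simple_sampling(A, H, draw, M=10000, kB=1.0, T=2.0):
    # Simple sampling: M uniformly drawn configurations, each weighted
    # with its Boltzmann factor (Eq. 2.16).
    beta = 1.0 / (kB * T)
    num = den = 0.0
    for _ in range(M):
        sigma = draw()                    # uniformly distributed configuration
        w = math.exp(-beta * H(sigma))
        num += A(sigma) * w
        den += w
    return num / den

# usage: estimate <H> for a small Ising chain
N, J = 8, 1.0
H = lambda s: -J * sum(s[i] * s[(i + 1) % N] for i in range(N))
draw = lambda: [random.choice((-1, +1)) for _ in range(N)]
print(simple_sampling(A=H, H=H, draw=draw))

At low temperatures the Boltzmann weights of almost all uniformly drawn configurations are negligible, which is exactly the weakness discussed next.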

But the distribution function of a macroscopic variable is strongly peaked around its expectation value. Therefore only a small region of the phase space contributes significantly to the thermal expectation value of an observable. The distribution function $P_T(E)$ of the observable $E$ shows, for the temperature $T$, a peak at $E_T$ whose relative full width at half maximum is proportional to $1/\sqrt{N}$, where $N$ is the number of degrees of freedom. Away from critical temperatures the distribution has the Gaussian form:

$$P_T(E) \propto \exp\left( -\frac{(E - E_T)^2}{2 k_B T^2 C(T)} \right) \qquad (2.18)$$

With decreasing temperature, $E_T$ decreases and the distribution changes.

But simple sampling chooses points of the phase space that are typical of the infinite-temperature distribution, not of the distribution at lower temperatures. The left curve in Figure 2.7 shows the energy distribution of a canonical ensemble at low temperature. The right curve shows the distribution produced by simple sampling, which corresponds to an infinitely high temperature with $\langle H \rangle = 0$. Because of the exponential decrease, the simple-sampling distribution carries only vanishing weight at the low energies that dominate $P_T(E)$. Thus simple sampling mostly produces physically unimportant configurations at low temperatures, and the calculated physical variables are wrong.

These disadvantages can be avoided by the "importance sampling" method of Metropolis.

2.2.3 Importance Sampling

Just like simple sampling, importance sampling takes a selection $\sigma_1, \ldots, \sigma_M$ of all possible states $\sigma_1, \ldots, \sigma_G$. The points $\sigma_1, \ldots, \sigma_M$ are not selected with equal probability, but with a special distribution $P(\sigma_i)$. For the observables it follows:

$$\bar{A} = \frac{\sum_{i=1}^{M} A(\sigma_i) \exp(-\beta H(\sigma_i)) / P(\sigma_i)}{\sum_{i=1}^{M} \exp(-\beta H(\sigma_i)) / P(\sigma_i)} \qquad (2.19)$$

For $P(\sigma_i) = P_{eq}(\sigma_i)$ the weights cancel and Equation 2.19 reduces to the arithmetic mean $\bar{A} = \frac{1}{M} \sum_{i=1}^{M} A(\sigma_i)$.

Figure 2.7: Probability distribution of the energy E

Thus the expectation value of the observable $A(\sigma)$ is equal to the arithmetic mean. Metropolis et al. did not require successive states $\sigma_i$ to be independent: a state $\sigma_{i+1}$ is generated with a suitable transition probability $W(\sigma_i \to \sigma_{i+1})$ that depends on the previous state. This is called a Markov process. The transition probability must be chosen in such a way that the distribution function $P(\sigma_i)$ equals $P_{eq}(\sigma)$ in the limiting case $M \to G$. A sufficient, but not necessary, condition for this is the principle of detailed balance:

Pequi)W(σi →σi0) =Pequi0)W(σi0 →σi) (2.20) If one puts Equation 2.11 in Equ. 2.20 and shifts around, it can be seen that the transition probability only depends on the energy change ∆H =H(σi0)− H(σi).

W(σi →σi0)

W(σi0 →σi) = exp µ

−∆H kBT

(2.21) The transistion probabilityW(σi →σi0) is not fully determined by this equation.

Normally it is chosen as:

$$W(\sigma_i \to \sigma_i') = \frac{1}{2} \left[ 1 - \tanh\left( \frac{\Delta H}{2 k_B T} \right) \right] = \frac{\exp\left( -\frac{\Delta H}{k_B T} \right)}{1 + \exp\left( -\frac{\Delta H}{k_B T} \right)} \qquad (2.22)$$

Or alternatively:

$$W(\sigma_i \to \sigma_i') = \begin{cases} \exp\left( -\frac{\Delta H}{k_B T} \right) & \text{for } \Delta H > 0 \\ 1 & \text{else} \end{cases} \qquad (2.23)$$

Equation 2.22 is the so-called Glauber function and Equation 2.23 the Metropolis function. With these transition probabilities a sequence of states $\sigma_i \to \sigma_i' \to \sigma_i''$ is produced. What remains is to show that the probability distribution $P(\sigma_i)$ converges to $P_{eq}(\sigma_i)$. This can be shown with the central limit theorem of probability theory; the complete proof can be found in the corresponding literature.
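As a small consistency check, the following sketch implements both transition probabilities and verifies numerically that each fulfils the ratio of Equation 2.21; the test values for $\Delta H$, $k_B$ and $T$ are arbitrary:

import math

def glauber(dH, kB=1.0, T=1.0):
    # Glauber function, Eq. 2.22
    return 0.5 * (1.0 - math.tanh(dH / (2.0 * kB * T)))

def metropolis(dH, kB=1.0, T=1.0):
    # Metropolis function, Eq. 2.23
    return math.exp(-dH / (kB * T)) if dH > 0 else 1.0

# both satisfy W(dH) / W(-dH) = exp(-dH / (kB T)), Eq. 2.21
dH, kB, T = 1.5, 1.0, 1.0
target = math.exp(-dH / (kB * T))
for W in (glauber, metropolis):
    assert math.isclose(W(dH) / W(-dH), target)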

Simulation of the ±J Model

In the following, an explanation is given of how the ±J model can be simulated with the single-spin-flip algorithm. For this, a lattice of size L×L×L with periodic boundary conditions is given. Every lattice point is occupied by one spin $s_i$; the start configuration is random. The interaction $J_{ij}$ between adjacent spins is randomly chosen as $+J$ or $-J$ and remains constant during the simulation. The individual steps are shown in Table 2.3.

1. Selection of a lattice point $i$ with spin $s_i$.
2. Calculation of the energy change when the spin flips from $s_i$ to $-s_i$.
3. Calculation of the transition probability $W$ for this spin flip.
4. Selection of a random number $Z$ between zero and one.
5. Spin flip for $Z < W$; no spin flip for $Z \geq W$.
6. Calculation of the variables of interest: energy, heat capacity, magnetisation, susceptibility.

Table 2.3: Simulation of the ±J model
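A minimal Python sketch of steps 1 to 5 of Table 2.3; the lattice size, the temperature and the choice of the Metropolis function (Equation 2.23) are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
L, kB, T = 8, 1.0, 1.0
spins = rng.choice([-1, 1], size=(L, L, L))     # random start configuration
# one +J/-J coupling per lattice direction and site, fixed during the run
J = rng.choice([-1.0, 1.0], size=(3, L, L, L))

def local_field(s, x, y, z):
    # sum of J_ij * s_j over the six neighbours of site (x, y, z)
    h = 0.0
    for d, (dx, dy, dz) in enumerate([(1, 0, 0), (0, 1, 0), (0, 0, 1)]):
        xp, yp, zp = (x + dx) % L, (y + dy) % L, (z + dz) % L
        xm, ym, zm = (x - dx) % L, (y - dy) % L, (z - dz) % L
        h += J[d, x, y, z] * s[xp, yp, zp]      # bond in +d direction
        h += J[d, xm, ym, zm] * s[xm, ym, zm]   # bond in -d direction
    return h

def spin_flip_step(s):
    x, y, z = rng.integers(0, L, size=3)               # 1. pick a lattice point
    dH = 2.0 * s[x, y, z] * local_field(s, x, y, z)    # 2. energy change of the flip
    W = np.exp(-dH / (kB * T)) if dH > 0 else 1.0      # 3. Metropolis probability
    Z = rng.random()                                   # 4. random number in [0, 1)
    if Z < W:                                          # 5. accept or reject
        s[x, y, z] *= -1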

Configurations that follow one another differ in just one spin; thus their physical properties are highly correlated. Moreover, the calculation of the thermal expectation values is computationally expensive. Therefore the expectation values should only be calculated from time to time. The physical interpretation is that at the beginning the system is not in thermal equilibrium, and many new configurations have to be produced before the individual variables are measured.
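Continuing the sketch above, the measurement schedule could look as follows; the numbers of equilibration and measurement sweeps and the measurement interval are arbitrary illustrative values:

n_sites = L ** 3

def sweep(s):
    # one sweep = L^3 single-spin-flip attempts
    for _ in range(n_sites):
        spin_flip_step(s)

def total_energy(s):
    # H = -sum over all bonds of J_ij s_i s_j, each bond counted once
    E = 0.0
    for d, shift in enumerate([(-1, 0, 0), (0, -1, 0), (0, 0, -1)]):
        E -= np.sum(J[d] * s * np.roll(s, shift, axis=(0, 1, 2)))
    return E

energies = []
for _ in range(1000):          # equilibration: discard early, correlated sweeps
    sweep(spins)
for t in range(10000):
    sweep(spins)
    if t % 10 == 0:            # measure only every tenth sweep (step 6)
        energies.append(total_energy(spins))

E_mean = np.mean(energies)
C = np.var(energies) / (kB * T ** 2)   # heat capacity via Eq. 2.15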
