chanics [70, 71]. The central quantity in this formalism is the Hamilton function, or Hamiltonian, which is written as

H = T(P) + V(R),    (3.1)

where T represents the kinetic energy and V the potential energy of the system. Relating it to the original formulation of Lagrangian mechanics, the Hamiltonian is obtained from the Lagrange function L = T − V via a Legendre transformation. Based on the principle of stationary action, the Hamilton equations can be derived as

\dot{p}_i = −∂H/∂r_i ,    \dot{r}_i = ∂H/∂p_i .    (3.2)

When writing the kinetic energy as T = Σ_i p_i²/(2m_i), Newton's second law

m_i \ddot{r}_i = −∇_i V = f_i ,    (3.3)

is retrieved (from Eq. 3.2, \dot{r}_i = ∂H/∂p_i = p_i/m_i and \dot{p}_i = −∇_i V), where f_i denotes the force acting on particle i. Hence, assuming knowledge of the potential energy V(R) at each point in configuration space and the initial conditions {R(t = 0); P(t = 0)}, one can in principle integrate Eq. 3.3 and, in combination with Newton's first law, obtain the classical motion of the system for all times [63]:

P(t) = −∫_0^t dt′ ∇V(R(t′)) + P(0)    (3.4)

R(t) = ∫_0^t dt′ (1/m) P(t′) + R(0)    (3.5)

In practice, the equations of motion are integrated numerically in discrete time steps, and for a vanishing step size any such scheme should converge to the exact integrals. However, the order of convergence is an important factor, as it determines the accuracy of the integration and therefore the efficiency of the simulation.

To propagate the system from a known state at time t to the next step at t + δt, a truncated Taylor expansion can be applied:

R(t + δt) = R(t) + \dot{R}(t) δt + (1/2) \ddot{R}(t) δt² + O(δt³)    (3.6)

As the time derivatives are only known up to second order, the error of this scheme is O(δt³).
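As an illustration, a single propagation step of this kind could be written as follows (a minimal sketch; the arrays r, v, f for positions, velocities and forces, the mass array m and the time step dt are placeholders, not part of any particular MD code):

    def taylor_step(r, v, f, m, dt):
        """Propagate the positions by one step of Eq. 3.6,
        r(t+dt) = r(t) + v(t)*dt + 0.5*(f(t)/m)*dt**2, with local error O(dt**3).

        r, v, f are (N, 3) arrays of positions, velocities and forces,
        m is an (N, 1) array of particle masses."""
        return r + v * dt + 0.5 * (f / m) * dt**2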

A more efficient and thus more popular method is the velocity-Verlet integration algorithm [182, 163]. In contrast to Eq. 3.6, the velocities are evaluated in the middle of each time step, at t + δt/2, based on the average force associated with the enclosing positions at t and t + δt. Its mathematical formulation reads

R(t + δt) = R(t) + V(t) δt + (1/2) F(t)/M δt²    (3.7)

V(t + δt) = V(t) + [F(t) + F(t + δt)]/(2M) δt ,    (3.8)

where V = \dot{R} stands for the array of particle velocities and M for the array of particle masses. The iterative application of these equations yields the trajectory of each particle and propagates the system in time.

Compared to the truncated Taylor expansion, it can be shown that the convergence of the velocity-Verlet algorithm is O(δt⁴). It offers enhanced stability and is time-reversible.

As the total energy of the system is conserved, the velocity-Verlet algorithm samples a microcanonical (NVE) ensemble.
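A single time step of this scheme might be sketched as follows (assuming NumPy-style arrays r, v, f, a mass array m and a user-supplied force routine; these names are placeholders rather than the interface of any particular MD code):

    def velocity_verlet_step(r, v, f, m, dt, force):
        """One velocity-Verlet step according to Eqs. 3.7 and 3.8.

        r, v, f : (N, 3) arrays of positions, velocities and forces at time t
        m       : (N, 1) array of particle masses
        force   : callable returning the forces (-grad V) for given positions"""
        r_new = r + v * dt + 0.5 * (f / m) * dt**2      # Eq. 3.7
        f_new = force(r_new)                            # forces at t + dt
        v_new = v + 0.5 * (f + f_new) / m * dt          # Eq. 3.8
        return r_new, v_new, f_new

    # Iterative application propagates the system in time, e.g.:
    #   f = force(r)
    #   for _ in range(n_steps):
    #       r, v, f = velocity_verlet_step(r, v, f, m, dt, force)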

3.2.2 Thermostats

Instead of considering a microcanonical ensemble, one can also simulate a system at constant temperature, which is useful to mimic an experimental setup. In experiments the temperature is maintained by keeping the system in equilibrium with a heat bath. Although its macroscopic size usually forbids the inclusion of an atomistic heat bath in MD simulations, several strategies have been devised to control the system temperature. When doing so, one has to make sure that the distribution of energies corresponds to a canonical ensemble.

3.2.3 Langevin Thermostat

A widely used approach is to couple the system to a continuous heat bath [179]. This can be accomplished by augmenting the equations of motion (Eq. 3.3) with a friction force and a fluctuating stochastic force, as if the system were immersed in a dissipative medium. By doing so one obtains the Langevin equation

m_i \ddot{r}_i(t) = f_i(t) − γ \dot{r}_i(t) + Γ_i(t) .    (3.9)

The damping constant γ is related to the magnitude of the random force via the fluctuation-dissipation theorem

⟨Γ_i(t) · Γ_j(t′)⟩ = 2 γ k_B T δ_{ij} δ(t − t′) .    (3.10)

Obeying this relation ensures that the energy additionally introduced into the system by the random force and the energy removed by dissipation balance each other, so that a constant temperature is maintained. An essential feature of this method is that it couples every particle individually to the heat bath, as the forces act locally on each particle. The temperature is therefore uniformly distributed over the system, avoiding temperature gradients. On the other hand, due to the random force this approach is no longer time-reversible.

In the limit of large friction, γ → ∞, inertial effects vanish completely; instead of Langevin dynamics the system then obeys Brownian dynamics.
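A very simple discretisation of Eqs. 3.9 and 3.10 could look as follows (a sketch only, using an Euler-type update and a per-component, finite-time-step reading of Eq. 3.10; gamma, kT, dt and the force routine are placeholders, and production codes use more sophisticated integration schemes):

    import numpy as np

    rng = np.random.default_rng()

    def langevin_step(r, v, f, m, force, gamma, kT, dt):
        """One Euler-type step of the Langevin equation (Eq. 3.9).

        The random force is drawn per Cartesian component from a Gaussian
        with variance 2*gamma*kT/dt, a finite-time-step version of the
        fluctuation-dissipation relation (Eq. 3.10)."""
        f_rand = rng.normal(0.0, np.sqrt(2.0 * gamma * kT / dt), size=v.shape)
        a = (f - gamma * v + f_rand) / m     # Eq. 3.9 divided by the mass
        v_new = v + a * dt
        r_new = r + v_new * dt
        return r_new, v_new, force(r_new)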

3.2.4 Berendsen Thermostat

A way to control the temperature without introducing a random force is offered by the Berendsen thermostat. In the Berendsen approach it is assumed that the temperature evolution of the system obeys the following equation:

dT/dt = (T_0 − T(t)) / τ_T ,    (3.11)

introducing a temperature coupling time constant τ_T and a target temperature T_0. This scheme is commonly implemented in the velocity-Verlet algorithm by scaling all velocities after the integration step by a uniform factor

ξ = [1 + (δt/τ_T) (T_0/T(t + δt) − 1)]^{1/2} ,    (3.12)

where T(t + δt) = 2E_kin(t + δt)/(f k_B) is the new temperature and f denotes the number of degrees of freedom of the system. Hence, the system temperature approaches the target temperature exponentially which, under normal circumstances, keeps the system rather stable even for small time constants.
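Expressed as code, the rescaling after a velocity-Verlet step could be sketched as follows (assuming f = 3N degrees of freedom, i.e. no constraints or removed centre-of-mass motion, and reduced units with k_B = 1; all names are placeholders):

    import numpy as np

    def berendsen_rescale(v, m, T0, tau_T, dt, k_B=1.0):
        """Scale all velocities at t + dt by the factor xi of Eq. 3.12.

        v : (N, 3) velocities after the integration step, m : (N, 1) masses.
        The instantaneous temperature is T = 2*E_kin/(f*k_B) with f = 3N."""
        e_kin = 0.5 * np.sum(m * v**2)
        f_dof = 3 * v.shape[0]
        T_new = 2.0 * e_kin / (f_dof * k_B)                 # T(t + dt)
        xi = np.sqrt(1.0 + (dt / tau_T) * (T0 / T_new - 1.0))
        return xi * v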

Compared to the Langevin thermostat the temperature coupling is less tight: since it acts uniformly on all particles, it does not disturb the local dynamics of the system. Because the resulting velocity distribution differs slightly from the Maxwell-Boltzmann distribution, the Berendsen thermostat does not maintain an exact canonical ensemble, but the deviations are usually negligible.

3.2.5 Nosé-Hoover Thermostat

An exact NVT ensemble can be achieved by employing a Nosé-Hoover thermostat [81]. In this approach the scale factor ξ(t) is introduced into the equations of motion as an additional degree of freedom of the system:

\ddot{R}(t) = F(t)/M − ξ(t) \dot{R}(t)    (3.13)

dξ(t)/dt = (T(t) − T_0) / Q .    (3.14)

Here Q = T_0 τ_T² includes the coupling time constant and can be interpreted as a mass of ξ. The integration of Eq. 3.14 is embedded into the velocity-Verlet scheme, in which the velocities are scaled twice during one time step. As the mass Q introduces an inertial effect into the temperature dynamics, the Nosé-Hoover method has the shortcoming that, particularly for small time constants, the temperature can exhibit large oscillations. Therefore the system can turn out to be less stable in some cases, compared to the Berendsen scheme.
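A simplified, explicit propagation step coupling Eqs. 3.13 and 3.14 could be sketched as follows (the half-step structure of actual implementations is only hinted at; all names are placeholders and reduced units with k_B = 1 are assumed):

    import numpy as np

    def nose_hoover_step(r, v, xi, f, m, force, T0, tau_T, dt, f_dof, k_B=1.0):
        """One simplified step of Eqs. 3.13 and 3.14.

        xi is the thermostat degree of freedom with 'mass' Q = T0*tau_T**2,
        f_dof the number of degrees of freedom f of the system."""
        Q = T0 * tau_T**2
        a = f / m - xi * v                                  # Eq. 3.13 at time t
        r_new = r + v * dt + 0.5 * a * dt**2
        f_new = force(r_new)
        v_half = v + 0.5 * a * dt                           # first velocity update
        T_half = np.sum(m * v_half**2) / (f_dof * k_B)      # instantaneous temperature
        xi_new = xi + (T_half - T0) / Q * dt                # Eq. 3.14
        # second velocity update; solves v_new = v_half + 0.5*(f_new/m - xi_new*v_new)*dt
        v_new = (v_half + 0.5 * f_new / m * dt) / (1.0 + 0.5 * xi_new * dt)
        return r_new, v_new, xi_new, f_new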

Apart from these methods, several other approaches have been devised to control the temperature of the system, such as the Andersen thermostat [7] or stochastic velocity rescaling [29], which might be suitable under certain conditions.

3.2.6 Barostats

Sometimes it is desired not only to couple the system to a heat bath but also to control the pressure, allowing for changes of the simulation cell size. The simulated system then corresponds to an NPT ensemble.

Pressure coupling is generally accomplished in a way that is similar to temperature coupling: the pressure tensor or, for isotropic systems, the scalar pressure is evaluated, and the cell size (including its interior, i.e. the particle coordinates) is scaled by a factor which is determined, similarly to the temperature scale factor, from the difference between the current pressure P(t) and the target pressure P_0. For anisotropic systems the scale factor η is itself a tensor which acts on the cell vectors as L_α → η · L_α, changing not only the size of the cell but also its shape. It is furthermore possible to include only selected components of the pressure tensor in the coupling; for surface-liquid interfaces, for example, it is sometimes useful to control the pressure only along the direction of the surface normal, while the lateral dimensions of the cell, which are determined by the crystal structure of the substrate, remain constant.

Two of the most common schemes are the Berendsen barostat and the Nosé-Hoover barostat. In both approaches the scale factor is determined in analogy to the respective thermostat. Accordingly, the Berendsen barostat offers better stability than the Nosé-Hoover barostat, as it does not cause pressure oscillations; on the other hand, only the Nosé-Hoover scheme yields an accurate NPT ensemble.
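As an illustration, an isotropic Berendsen-style pressure coupling step could be sketched as follows. The explicit form of the scale factor, including the isothermal compressibility β (beta), is not given above; the standard textbook expression is used here as an assumption, not as the formulation of any particular code:

    def berendsen_pressure_scale(r, box, P, P0, tau_P, dt, beta):
        """Isotropic Berendsen-style pressure coupling (assumed textbook form).

        r     : (N, 3) particle positions, box : (3,) orthorhombic cell lengths
        P, P0 : current and target (scalar) pressure
        beta  : isothermal compressibility, tau_P : coupling time constant
        Cell lengths and particle coordinates are scaled by the same factor mu."""
        mu = (1.0 - beta * dt / tau_P * (P0 - P)) ** (1.0 / 3.0)
        return mu * r, mu * box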

3.2.7 Periodic Boundary Conditions

The number of particles that can be treated in molecular dynamics ranges from a few atoms to about a million particles, depending on how elaborate the computation of the interactions is. Even at the upper boundary of this range, the corresponding system size is usually still small compared to macroscopic systems, which means that artificial boundary effects can contribute considerably to the system's properties. A practical way to deal with this problem is to apply periodic boundary conditions. This means that images of the system are copied and translated by all possible linear combinations of the cell vectors. These images interact with the original particles, which mimics an infinitely extended macroscopic system. In this way artificial boundaries, such as walls imposed to contain the particles in the cell, are avoided. Moreover, since the interactions are in most cases short-ranged, it is sufficient to consider only the first "shell" of images.
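For a general cell the translation vectors of this first shell can be generated directly from the cell vectors, for instance as follows (a minimal sketch; cell is assumed to be a 3 × 3 array whose rows are the cell vectors):

    import numpy as np

    def first_shell_image_shifts(cell):
        """Translation vectors of the central cell and its 26 first-shell images.

        The shifts are all combinations n1*a1 + n2*a2 + n3*a3 with ni in
        {-1, 0, 1}; the zero vector corresponds to the original cell itself."""
        return np.array([n1 * cell[0] + n2 * cell[1] + n3 * cell[2]
                         for n1 in (-1, 0, 1)
                         for n2 in (-1, 0, 1)
                         for n3 in (-1, 0, 1)])          # shape (27, 3)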

The calculation of short-ranged interactions obeys the minimum image convention, which means that only the smallest possible distance between two particles or their periodic images is taken into account. For long-ranged interactions, e.g. electrostatic interactions, dedicated strategies have been developed (cf. Sec. 3.3.2). Of course, the concept of periodic boundary conditions does not completely prevent finite-size effects, as the periodicity of the repeated images might still influence the system's behavior, in particular for small cells. Thus, one is always advised to test the convergence of the properties of interest with respect to the system size to exclude significant artefacts.
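For an orthorhombic cell the minimum image convention can be sketched as follows (positions and cell lengths as plain arrays; general triclinic cells require the analogous wrapping in fractional coordinates):

    import numpy as np

    def minimum_image_vector(r_i, r_j, box):
        """Minimum image separation vector for an orthorhombic cell.

        r_i, r_j : (3,) position vectors, box : (3,) cell lengths. Each
        Cartesian component of the separation is wrapped into [-L/2, L/2),
        so only the smallest distance between the two particles or their
        periodic images enters the interaction."""
        d = r_i - r_j
        return d - box * np.round(d / box)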