
11.1.1 Rationale

The aim of finite-time scaling analysis is to recover the critical point ρc and the critical exponents β of the no-return probability 1 − L(ρ) and γ of the average return time τ̄ from Monte Carlo simulations of finite-time trajectories. Furthermore, finite-time scaling analysis produces estimates of the exponent ν of the temporal coherence scale and of the exponent τ of the return time distribution. This is achieved by collapsing the recorded data for each of the relevant quantities onto a single master curve. Finding the right values of the critical parameter and the exponents is the task of an optimization algorithm, which tunes the data collapse according to a goal function quantifying the goodness of the collapse.

11.1.2 The finite-time scaling ansatz

We have established that the exponents of the return time distribution determine the critical exponents at the temporal percolation transition, and vice versa. As a phase transition only occurs in an infinite system, numerical simulation in finite time cannot probe the transition directly. Here, we adapt the conventional remedy of finite-size scaling analysis to the temporal percolation setting. This finite-time scaling analysis numerically recovers the critical point ρc and the critical exponents.

Let T be the temporal extent of a dynamical system, i.e. the number of time steps computed in a simulation. Let A_T(ρ) be a quantity that diverges as |ρ − ρc|^{−ζ} in the critical region of the infinite system (T → ∞, ρ → ρc).

The finite-size scaling ansatz translates to [116–118]

\[
A_T(\rho) = T^{\zeta/\nu}\,\tilde{f}\!\left(T^{1/\nu}(\rho - \rho_c)\right), \qquad (T \to \infty,\ \rho \to \rho_c), \tag{11.1}
\]

with the dimensionless scaling function f̃ and the critical exponent ν of the temporal coherence scale ξ(ρ) in the infinite system. The scaling function controls the finite-time effects.

A Monte Carlo study yields data a_{T,ρ,i} at temporal extent T and parameter ρ for each run i. Let a_T(ρ) denote the average over all runs. Plotting T^{−ζ/ν} a_T(ρ) against T^{1/ν}(ρ − ρc) should let the numerical data collapse onto a single master curve f̃(x), provided the values of ρc, ζ, and ν are correct. The ansatz holds exactly only for T → ∞; at finite temporal extents, systematic errors remain. [116, 117]
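As a concrete illustration of this rescaling step, the following Python sketch computes the collapse coordinates of Eq. (11.1) for a trial choice of ρc, ζ, ν. The data layout (NumPy arrays for the temporal extents T_i, the parameter values ρ_j, and the run averages) and the function name rescale_for_collapse are assumptions of this sketch, not part of the analysis described above.

```python
import numpy as np

def rescale_for_collapse(T, rho, a, rho_c, zeta, nu):
    """Rescale averages a_T(rho) according to the finite-time scaling
    ansatz (11.1): returns x = T^{1/nu} (rho - rho_c) and
    y = T^{-zeta/nu} a_T(rho) for every temporal extent and parameter."""
    T = np.asarray(T, dtype=float)        # temporal extents T_i
    rho = np.asarray(rho, dtype=float)    # parameter values rho_j
    a = np.asarray(a, dtype=float)        # averages, shape (len(T), len(rho))
    x = T[:, None] ** (1.0 / nu) * (rho[None, :] - rho_c)
    y = T[:, None] ** (-zeta / nu) * a
    return x, y

# With the correct rho_c, zeta, nu, plotting each row of y against the
# corresponding row of x should collapse all curves onto the master curve.
```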



For quantities that jump at the critical point ρc, such as the size P(ρ) of the largest cluster in temporal percolation, we have the finite-time scaling

\[
P_T(\rho) = \bar{P}\!\left(T^{1/\nu}(\rho - \rho_c)\right) \tag{11.2}
\]

with scaling function P̄(x), as the critical exponent ζ of P(ρ) is zero. Hence, independent of system size, we have P_T(ρc) = P̄(0). The common intersection point of the measured data curves yields an estimate of the threshold ρc. This estimate is unbiased with regard to the critical exponents, and “should be free” from systematic errors due to finite system size. [116]
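How such an intersection point can be located numerically is sketched below for two temporal extents: the measured P_T(ρ) curves are interpolated and the root of their difference is bracketed and refined. The function name crossing_point, the cubic interpolation, and the use of SciPy's brentq root finder are choices made for this sketch, not part of the cited method.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

def crossing_point(rho, P_small, P_large):
    """Estimate rho_c as the crossing of P_T(rho) measured at two
    temporal extents, via root-finding on the interpolated difference."""
    f_small = interp1d(rho, P_small, kind="cubic")
    f_large = interp1d(rho, P_large, kind="cubic")
    diff = lambda r: float(f_small(r) - f_large(r))
    # locate a sign change of the difference on the sampled grid
    signs = np.sign([diff(r) for r in rho])
    k = np.flatnonzero(np.diff(signs) != 0)[0]
    return brentq(diff, rho[k], rho[k + 1])
```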

11.1.3 Quality of finite-time data collapse

The finite-time scaling ansatz (11.1) quantifies how a statistic A_T(ρ) observed in finite-time trajectories scales with time T and parameter ρ, according to a scaling function f̃, the critical parameter ρc, the critical exponent ζ of the quantity itself, and the critical exponent ν of the temporal coherence scale. [116, 117]

Finite-time scaling analysis takes numerical data a_{T_i j} at system sizes T_i and parameter values ρ_j. Plotting T_i^{−ζ/ν} a_{T_i j} against T_i^{1/ν}(ρ_j − ρc) with the right choice of ρc, ν, and ζ should let the data collapse onto a single curve, namely the scaling function f̃ from the finite-time scaling ansatz. In the following, we present a measure by Houdayer and Hartmann [185] for the quality of the data collapse. Melchert [186] refers to some alternative measures, for example those in References [187, 188], and to some applications of these measures in the literature.

Houdayer and Hartmann [185] refine a method proposed by Kawashima and Ito [189]. They define the quality as the reduced χ² statistic

\[
S = \frac{1}{N} \sum_{i,j} \frac{(y_{ij} - Y_{ij})^2}{dy_{ij}^2 + dY_{ij}^2}, \tag{11.3}
\]

where the values y_ij, dy_ij are the scaled observations and their standard errors at x_ij, and the values Y_ij, dY_ij are the estimated value of the master curve and its standard error at x_ij. The sum in the quality function S only involves terms for which the estimated value Y_ij of the master curve at x_ij is defined.

The number of such terms is N. The quality S is the mean square of the weighted deviations from the master curve. As we expect the individual deviations y_ij − Y_ij to be of the order of the individual error √(dy_ij² + dY_ij²) for an optimal fit, the quality S should attain its minimum S_min at around 1 and be much larger otherwise. [190]

Let i enumerate the system sizes T_i, i = 1, …, k, and let j enumerate the parameters ρ_j, j = 1, …, n, with ρ_1 < ··· < ρ_n. The scaled data are

\[
y_{ij} = T_i^{-\zeta/\nu}\, a_{T_i j}, \tag{11.4}
\]
\[
dy_{ij} = T_i^{-\zeta/\nu}\, da_{T_i j}, \tag{11.5}
\]
\[
x_{ij} = T_i^{1/\nu} (\rho_j - \rho_c). \tag{11.6}
\]

The master curve itself depends on the scaled data. For a given i, or equivalently T_i, we estimate the master curve at x_ij from the two data points of every other system size which enclose x_ij: for each i′ ≠ i, let j′ be such that x_{i′j′} ≤ x_ij ≤ x_{i′(j′+1)}, and select the points (x_{i′j′}, y_{i′j′}, dy_{i′j′}) and (x_{i′(j′+1)}, y_{i′(j′+1)}, dy_{i′(j′+1)}). Do not select points for some i′ if there is no such j′. If there is no such j′ for any i′, the master curve remains undefined at x_ij.

Given the selected points (x_l, y_l, dy_l), the local approximation of the master curve is the linear fit

\[
y = m x + b \tag{11.7}
\]

with weighted least squares. [191] The weights w_l are the reciprocal variances, w_l = 1/dy_l². Writing K = Σ_l w_l, K_x = Σ_l w_l x_l, K_y = Σ_l w_l y_l, K_xx = Σ_l w_l x_l², K_xy = Σ_l w_l x_l y_l and Δ = K K_xx − K_x², the estimates and (co)variances of the slope m and intercept b are

\[
m = \frac{K K_{xy} - K_x K_y}{\Delta}, \qquad b = \frac{K_{xx} K_y - K_x K_{xy}}{\Delta},
\]
\[
\operatorname{var}(m) = \frac{K}{\Delta}, \qquad \operatorname{var}(b) = \frac{K_{xx}}{\Delta}, \qquad \operatorname{cov}(m, b) = -\frac{K_x}{\Delta},
\]

so that the master curve at x_ij is estimated as Y_ij = m x_ij + b with squared error dY_ij² = var(b) + 2 x_ij cov(m, b) + x_ij² var(m).
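A sketch of the full quality computation, combining the neighbour selection, the local weighted linear fit, and the sum (11.3), might look as follows in Python. The data layout (one array of scaled points per system size, sorted in x) and the name quality_S are assumptions of this sketch.

```python
import numpy as np

def quality_S(x, y, dy):
    """Quality of collapse, Eq. (11.3).  x, y, dy are lists (one entry per
    system size T_i) of 1d arrays holding the scaled data x_ij, y_ij and
    errors dy_ij, each sorted in x."""
    terms = []
    for i in range(len(x)):
        for j in range(len(x[i])):
            xs, ys, ws = [], [], []
            # from every other system size, take the two points enclosing x_ij
            for ip in range(len(x)):
                if ip == i:
                    continue
                k = np.searchsorted(x[ip], x[i][j])
                if k == 0 or k == len(x[ip]):
                    continue          # x_ij not enclosed by this system size
                for l in (k - 1, k):
                    xs.append(x[ip][l])
                    ys.append(y[ip][l])
                    ws.append(1.0 / dy[ip][l] ** 2)
            if len(xs) < 2:
                continue              # master curve undefined at x_ij
            xs, ys, ws = map(np.asarray, (xs, ys, ws))
            # weighted least-squares line y = m*x + b through the selection
            K, Kx, Ky = ws.sum(), (ws * xs).sum(), (ws * ys).sum()
            Kxx, Kxy = (ws * xs * xs).sum(), (ws * xs * ys).sum()
            Delta = K * Kxx - Kx ** 2
            m = (K * Kxy - Kx * Ky) / Delta
            b = (Kxx * Ky - Kx * Kxy) / Delta
            Y = m * x[i][j] + b
            # variance of the fitted line at x_ij from the (co)variances above
            dY2 = (Kxx - 2 * x[i][j] * Kx + x[i][j] ** 2 * K) / Delta
            terms.append((y[i][j] - Y) ** 2 / (dy[i][j] ** 2 + dY2))
    return np.mean(terms)             # (1/N) * sum over all defined terms
```

Recomputing the master-curve estimate anew for every point keeps the sketch close to the description above; for large data sets the per-size neighbour searches could be cached.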

11.1.4 A refinement of the quality function

In this Thesis, I further refine the quality function (11.3) to let the data for each system size have equal weight. The original sum involves only terms for which the master curve is defined. As the number of missing terms in general differs from system size to system size, the sum implicitly weights system sizes differently. This is unintended behavior, especially for data sets with less dense coverage of the critical region at large system sizes. The refined quality function is

\[
S = \frac{1}{k} \sum_{i} \frac{1}{N_i} \sum_{j} \frac{(y_{ij} - Y_{ij})^2}{dy_{ij}^2 + dY_{ij}^2},
\]

where the number of system sizes is k (as before), N_i is the number of terms for the i-th system size, and the inner sum runs over those terms only. By separately averaging over all available terms for each system size, and only then averaging over all system sizes, the contributions of each system size have equal weight in the final sum.
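Under the same data-layout assumptions, the refinement only changes how the terms are averaged. The sketch below takes the per-point master-curve estimation (the inner part of the previous listing) as an injected helper that returns (Y_ij, dY_ij²) or None; that factoring is a choice of this sketch.

```python
import numpy as np

def quality_S_balanced(x, y, dy, estimate_master_curve):
    """Refined quality of collapse: average the weighted squared deviations
    within each system size first, then over the system sizes, so that every
    system size enters the final value with equal weight.
    `estimate_master_curve(x, y, dy, i, j)` returns (Y_ij, dY_ij^2) or None
    if the master curve is undefined at x_ij."""
    per_size = []
    for i in range(len(x)):                      # loop over system sizes T_i
        terms_i = []
        for j in range(len(x[i])):               # loop over parameters rho_j
            estimate = estimate_master_curve(x, y, dy, i, j)
            if estimate is None:                 # master curve undefined here
                continue
            Y, dY2 = estimate
            terms_i.append((y[i][j] - Y) ** 2 / (dy[i][j] ** 2 + dY2))
        if terms_i:                              # N_i > 0
            per_size.append(np.mean(terms_i))    # (1/N_i) * sum over j
    return np.mean(per_size)                     # average over contributing sizes
```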


11.1.5 Parameter estimation

Following Melchert [186], we employ the Nelder–Mead algorithm to minimize the quality function and estimate the critical parameter value ρc and the exponents ν and ζ from Monte Carlo data.

The Nelder–Mead algorithm attempts to minimize a goal function f : ℝⁿ → ℝ of an unconstrained optimization problem. [192] As it only evaluates function values, but no derivatives, the Nelder–Mead algorithm classifies as a direct search method. [193] Although the method generally lacks rigorous convergence properties, [194, 195] in practice the first few iterations often yield satisfactory results. [196] Typically, each iteration evaluates the goal function only once or twice, which is why the Nelder–Mead algorithm is comparatively fast if goal function evaluation is the computational bottleneck. [196, 197]

Nelder and Mead [192] refined a simplex method by Spendley, Hext, and Himsworth [198]. A simplex is the generalization of triangles in ℝ² to n dimensions: in ℝⁿ, a simplex is the convex hull of n + 1 vertices x_0, …, x_n ∈ ℝⁿ. Starting with the initial simplex, the algorithm attempts to decrease the function values f_i = f(x_i) at the vertices by a sequence of elementary transformations of the simplex along the local landscape. The algorithm succeeds when the simplex is sufficiently small (domain convergence), and/or when the function values f_i are sufficiently close (function-value convergence). The algorithm fails when it did not succeed after a given number of iterations or function evaluations. See Singer and Nelder [196] and references therein for a complete description of the algorithm and the simplex transformations.
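In practice the minimization can be delegated to an existing Nelder–Mead implementation. The sketch below uses scipy.optimize.minimize together with the quality_S sketch from above; the wrapper names (scaled_quality, fit_critical_parameters) and the tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def scaled_quality(params, T, rho, a, da):
    """Rescale the Monte Carlo averages a and errors da according to
    Eqs. (11.4)-(11.6) for the trial parameters and return the quality S."""
    rho_c, nu, zeta = params
    x  = [Ti ** (1.0 / nu) * (rho - rho_c) for Ti in T]
    y  = [Ti ** (-zeta / nu) * a[i]  for i, Ti in enumerate(T)]
    dy = [Ti ** (-zeta / nu) * da[i] for i, Ti in enumerate(T)]
    return quality_S(x, y, dy)        # quality function from the sketch above

def fit_critical_parameters(T, rho, a, da, initial_guess):
    """Minimize S over (rho_c, nu, zeta) with the Nelder-Mead method."""
    return minimize(scaled_quality, x0=np.asarray(initial_guess, dtype=float),
                    args=(T, rho, a, da), method="Nelder-Mead",
                    options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 500})
```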

In order to estimate the uncertainties of the critical parameter value ρc and the critical exponents ν and ζ, we employ the method suggested by Spendley, Hext, and Himsworth [198] and Nelder and Mead [192]. Fitting a quadratic surface to the vertices and the midpoints of the edges of the final simplex yields an estimate for the variance–covariance matrix. The errors are the square roots of the diagonal terms. [190]

The Nelder–Mead algorithm needs an initial guess for ρc, ν, and ζ. This part of the analysis requires human oversight and intervention. Inspecting the data, we will have an idea of the approximate location of the critical point.

Given that the critical exponents of the percolation probability and the percolation strength should be zero, we determine ρc and ν by performing the finite-time scaling analyses on these data first (assuming that ζ = 0). If the Nelder–Mead simplex gets stuck in a local minimum, it is recommended to simply restart the search with slightly perturbed, or considerably revised, initial values. [196]
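A possible two-stage driver following this strategy is sketched below, reusing the wrappers from the previous sketch. The data names (T, rho, P, dP, tau, dtau) and the initial guesses are placeholders standing in for the actual Monte Carlo results, not values from this analysis.

```python
import numpy as np
from scipy.optimize import minimize

# Stage 1: collapse the percolation-probability data with zeta fixed to zero,
# fitting only rho_c and nu (initial guess purely illustrative).
objective = lambda p: scaled_quality((p[0], p[1], 0.0), T, rho, P, dP)
stage1 = minimize(objective, x0=np.array([0.6, 1.3]), method="Nelder-Mead")
if not stage1.success:
    # restart the search with slightly perturbed initial values
    stage1 = minimize(objective, x0=stage1.x + 0.05 * np.random.randn(2),
                      method="Nelder-Mead")
rho_c_hat, nu_hat = stage1.x

# Stage 2: use rho_c and nu as starting values when fitting the remaining
# quantities, e.g. the mean return time, together with their exponent zeta.
stage2 = fit_critical_parameters(T, rho, tau, dtau, (rho_c_hat, nu_hat, 0.5))
```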