
6.3 Analysis

6.3.1 Signal to noise problem

When considering the definition in Eq. (6.11), one must make a choice concerning the parameter $r_{\rm cut}$, which marks the upper limit in the summation of the topological charge density correlator. In principle, one could sum up to the maximum distance allowed by the size of the plateau, but in practice this merely inflates the statistical error without adding signal.

As discussed in Ref. [94], one possibility is to model the large-distance behaviour of the tail of the correlator with a phenomenological ansatz that accounts for its exponential decay. While this reduces the statistical error, it introduces a systematic effect into the calculation. If instead the estimated tail contribution lies well below the statistical uncertainty, it was argued in Ref. [79] that a more convenient choice is to cut the sum at a sufficiently large value of $r_{\rm cut}$. We follow the latter approach.
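The effect of the cut can be sketched numerically. The following is a minimal Python illustration, not code from the thesis: the function names, array layout, and toy correlator values are ours, chosen only to show how the partial sum saturates once the tail is pure noise.

```python
import numpy as np

def chi_partial(corr, r_cut):
    """Partial sum of a correlator over separations r <= r_cut (inclusive).

    corr : 1-D array with corr[r] = two-point function at separation r.
    """
    r = np.arange(len(corr))
    return corr[r <= r_cut].sum()

# Toy correlator: a positive contact term at r = 0 plus a small,
# negative, exponentially decaying tail (illustration only).
corr = np.zeros(20)
corr[0] = 1.0
corr[1:] = -0.02 * np.exp(-0.5 * np.arange(1, 20))

# The partial sums quickly plateau: past a moderate r_cut the tail
# contributes essentially nothing.
chis = [chi_partial(corr, rc) for rc in range(20)]
```

In real data the far tail carries a statistical error on every slice, so extending the sum past the plateau only accumulates noise; that is the motivation for cutting at a finite $r_{\rm cut}$.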

One way to choose $r_{\rm cut}$ is to estimate the contribution from the tail of the correlator by fitting an exponential governed by the mass of the lightest pseudoscalar particle of the theory [79]. In our case, dealing with the pure gauge theory, the large mass of approximately $2.6\,\mathrm{GeV}$ [117] of the pseudoscalar glueball $0^{-+}$ makes it impossible to identify the exponential decay of the correlator.

As we have mentioned, $\langle q(0)\,q(r)\rangle$ is negative at $r > 0$ and receives a positive contribution from the contact term at $x = 0$. Due to the smoothing at positive flow time, the correlation function is positive for small $r$ and only becomes negative for $r \gtrsim \sqrt{8t}$. At small values of $t$ this is noticeable, but at $t = t_0$ the smoothing of the flow hides this behaviour (see Figure 5.7). In Figure 6.1 we show the $r_{\rm cut}$ dependence of $\chi_{\rm YM}$, where it becomes clear that the signal receives no contribution from the long tail of the correlator, and summing up more time slices simply increases the error. The red vertical line shows the value at which we cut the sum, following the procedure described below.

[Figure: $10^4\, t_0^2\, \chi_{\rm YM}$ versus $r_{\rm cut}/\sqrt{t_0}$ for ensemble A(5)3.]

Figure 6.1: Plot of the $r_{\rm cut}$ dependence of $\chi_{\rm YM}$ for the ensemble A(5)3. The band shows the value of $\chi_{\rm YM}$ if the sum is stopped at $r_{\rm cut} = 7.0\sqrt{t_0}$.

Our strategy is to use the algorithm introduced in Chapter 5. Ideally, we would like to apply it to all the ensembles in order to obtain a good estimate of the correlation function at large values of $r$. However, most of the ensembles in Table 6.1 had already been generated by the time our multilevel algorithm was formulated. Our strategy was therefore to use an $SU(3)$ ensemble with the same parameters as in Table 5.1, generated with the multilevel updates described in the previous chapter and with a total of $n_0 \times n_1 = 184 \times 280 = 201600$ measurements. Notice that this is an order of magnitude more measurements than used in our $SU(N)$ study, with the added advantage of the faster error scaling of the multilevel algorithm.

We compare it to an ensemble with the same parameters but with $n_0 = 15600$ measurements and no multilevel-type updates, which is statistically equivalent to the ensembles in Table 6.1.

Figure 6.2 shows a comparison of the correlator as a function of $r$. The red open points are computed using the standard algorithm, while the black filled points correspond to the improved algorithm. Our strategy is to use these data to determine the right value of $r_{\rm cut}$. To that end, we first define

\[
\chi^{\rm imp}_{\rm YM,tail}(r) = 2 \sum_{\Delta = r+1}^{\Delta_{\max}} \bar P(\Delta), \qquad (6.13)
\]

as the sum of the contributions to $\chi^{t}_{\rm YM}$ from values of $\Delta > r$. Here $\Delta_{\max}$ is the maximum distance up to which the correlator can be summed on our finite lattice, i.e. $\Delta_{\max} = T - 2d$. The superscript "imp" indicates explicitly that this quantity is to be computed with the high-precision data from the improved multilevel algorithm.
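The tail sum of Eq. (6.13) is straightforward to express in code. This is a sketch under our own conventions (the function name and the toy values of $\bar P(\Delta)$ are ours; the thesis provides no code):

```python
import numpy as np

def chi_tail(P_bar, r):
    """Eq. (6.13): tail contribution 2 * sum_{Delta = r+1}^{Delta_max} P_bar(Delta).

    P_bar : 1-D array with P_bar[Delta] for Delta = 0 .. Delta_max.
    """
    return 2.0 * P_bar[r + 1:].sum()

# Toy data: a rapidly decaying tail (illustration only, not thesis data).
P_bar = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
tail = [chi_tail(P_bar, r) for r in range(len(P_bar))]
```

For $r = \Delta_{\max}$ the slice is empty and the tail is zero by construction, matching the finite summation range of the lattice.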

[Figure: $10^4\, t_0^{5/2}\, \langle \bar q_{t_0}(0)\, \bar q_{t_0}(r)\rangle / L^3$ versus $r/\sqrt{t_0}$, comparing the improved (imp) and standard (std) algorithms.]

Figure 6.2: Plot of $\langle \bar q_{t_0}(0)\, \bar q_{t_0}(r)\rangle$. The red points correspond to data computed using the standard algorithm and around 15k measurements, while the black points correspond to the observable computed using the multilevel algorithm and $n_0 \times n_1 \approx 200$k measurements. Notice that the real advantage of the multilevel is only visible at $r/\sqrt{t_0} > 7.0$. Figure from Ref. [98].

We then look at the condition

\[
F(N)\, \chi^{\rm imp}_{\rm YM,tail}(r)\Big|_{r = r_{\rm cut}} < 0.25\, \sigma_{{\rm std},SU(N)}(r)\Big|_{r = r_{\rm cut}}, \qquad F(N) = \frac{\chi_{{\rm std},SU(N)}(r)}{\chi_{{\rm std},SU(3)}(r)}, \qquad (6.14)
\]

where $\sigma_{{\rm std},SU(N)}$ is the statistical error of $\chi_{\rm YM}$ for each of the $SU(N)$ ensembles, and $\chi_{{\rm std},SU(N)}$ is also computed using the standard algorithm. The strategy is essentially to attach the tail determined from the multilevel algorithm in $SU(3)$ to the $SU(N)$ ensembles in Table 6.1 and compare it to the statistical error. The factor $F(N)$ accounts for the $N$ dependence of the observable, but in practice, for $r$ such that the condition in Eq. (6.14) is satisfied, the result does not depend on $F(N)$.
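The selection of $r_{\rm cut}$ via Eq. (6.14) amounts to a simple scan. The following Python sketch is ours (the function name, the use of the absolute value of the tail, and the toy numbers are assumptions for illustration, not thesis code):

```python
import numpy as np

def find_r_cut(chi_tail_imp, sigma_std, F=1.0):
    """Smallest r with F * |chi_tail_imp(r)| < 0.25 * sigma_std(r), cf. Eq. (6.14).

    chi_tail_imp : tail contributions from the improved SU(3) data, indexed by r
    sigma_std    : statistical errors of chi_YM from the standard SU(N) data
    F            : rescaling factor F(N); in practice it drops out once the
                   condition is satisfied
    """
    for r in range(len(sigma_std)):
        if F * abs(chi_tail_imp[r]) < 0.25 * sigma_std[r]:
            return r
    return None  # criterion never satisfied on this range

# Toy numbers for illustration only (not thesis data).
tail = np.array([1.0, 0.4, 0.1, 0.01, 0.001])
sigma = np.full(5, 0.2)
r_cut = find_r_cut(tail, sigma, F=1.2)
```

Because the tail decays quickly while $\sigma$ stays roughly constant, the condition flips from violated to satisfied at a single well-defined separation, which is the candidate $r_{\rm cut}$.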

As long as the error in the estimate of the tail itself is small compared to $\sigma$, imposing Eq. (6.14) guarantees a small systematic error compared to the statistical one. In fact, the values of $r_{\rm cut}$ obtained for the $SU(N)$ ensembles are all below $6\sqrt{t_0}$. However, as can be seen in Figure 6.2, for $r/\sqrt{t_0} < 7.0$ we are still in the region where the multilevel updates do not yet yield the maximum theoretical improvement, and the errors are only a factor of 2 smaller than those of the standard algorithm. Because of this, we consider the criterion significant only for $r/\sqrt{t_0} \geq 7.0$. Past this point, the errors rapidly shrink to $O(10^{-1})$ of those of the standard algorithm. Considering this, $r_{\rm cut} = 7.0\sqrt{t_0}$ is a safe choice, as it satisfies Eq. (6.14) and the tail is evaluated to very high precision compared to the standard correlator.