
3.3 Continuous-Time Hybridization Expansion Quantum Monte-Carlo Algorithm

3.3.2 The Sampling Procedure


With the definition of $g_C$, it is immediately clear from this expression that

$g_C^{\bar\sigma}(\tau) = \int_0^\beta d\tau' \int_0^\beta d\tau''\, \delta(\tau'-\tau''-\tau)\, \frac{1}{w_C}\,\frac{\partial w_C}{\partial \Gamma^{\bar\sigma}(\tau''-\tau')}$
$\qquad = \int_0^\beta d\tau' \int_0^\beta d\tau''\, \delta(\tau'-\tau''-\tau)\, \det\!\big(M^{\bar\sigma}(\{\tau_l^{c_{\bar\sigma}}, \tau_l^{\bar c_{\bar\sigma}}\})\big)\, \frac{\partial \det\!\big((M^{\bar\sigma})^{-1}(\{\tau_l^{c_{\bar\sigma}}, \tau_l^{\bar c_{\bar\sigma}}\})\big)}{\partial \Gamma^{\bar\sigma}(\tau''-\tau')}. \qquad (3.90)$

Combining the rules for functional differentiation, i.e.

$\frac{\partial \Gamma^{\sigma}(\tau_i-\tau_l)}{\partial \Gamma^{\bar\sigma}(\tau''-\tau')} = \delta\big((\tau_i-\tau_l)-(\tau''-\tau')\big)\,\delta_{\bar\sigma\sigma}, \qquad (3.91)$

and matrix inversion,

$M_{ij} = \frac{\det(\widetilde{M^{-1}})_{(ji)}}{\det(M^{-1})} = \frac{1}{\det(M^{-1})}\,\frac{\partial \det(M^{-1})}{\partial (M^{-1})_{ji}}, \qquad (3.92)$

where we used the cofactors $\det(\widetilde{M^{-1}})_{(ji)}$ for the matrix inversion, which have a possible representation as the derivative of the determinant with respect to the corresponding matrix element $(M^{-1})_{ji}$, we finally obtain the formula for the observables:

$g_C^{\bar\sigma}(\tau) = \sum_{i,j} \delta\big(\tau_i^{c_{\bar\sigma}} - \tau_j^{\bar c_{\bar\sigma}} - \tau\big)\, \big(M^{\bar\sigma}(\{\tau_l^{c_{\bar\sigma}}, \tau_l^{\bar c_{\bar\sigma}}\})\big)_{ji}. \qquad (3.93)$

The delta function in this formula must either be measured on a fine grid in $\tau$-space, i.e. on a discretized time interval $[0,\beta]$, or, as we will do, the measurements have to be performed in another representation which is much more suitable for continuous, smooth functions (see chapter ...). The average occupation $\langle n_\sigma\rangle$ of the $\sigma$-component and the average double occupancy $\langle n_\uparrow n_\downarrow\rangle$ are further observables which can be measured directly in the Monte-Carlo sampling, as becomes clear when taking the derivatives

$\beta\langle n_\sigma\rangle = \frac{1}{Z}\,\frac{\partial Z}{\partial \mu_\sigma} \quad\text{and}\quad \beta\langle n_\uparrow n_\downarrow\rangle = -\frac{1}{Z}\,\frac{\partial Z}{\partial U}. \qquad (3.94)$

As above, this leads to the observables

$n_C^{\sigma} = \frac{1}{\beta w_C}\,\frac{\partial w_C}{\partial \mu_\sigma} = \frac{\tau_\sigma^{occ}}{\beta} \quad\text{and}\quad (n_\uparrow n_\downarrow)_C = -\frac{1}{\beta w_C}\,\frac{\partial w_C}{\partial U} = \frac{\tau^{double}}{\beta}, \qquad (3.95)$

which can be accumulated without any effort during the Monte-Carlo sampling and are determined more accurately than the Green’s functions since they are static variables.
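Both measurement rules can be condensed into a few lines of code. The following Python fragment is our own minimal sketch, not the thesis implementation: function names and the layout of segments as non-wrapping `(start, end)` pairs are assumptions, and normalization of the binned Green's function (bin width, number of samples) is deferred to the accumulation stage.

```python
import numpy as np

def measure_green(tau_c, tau_cbar, M, beta, n_bins):
    """Bin one configuration's contribution to g_C(tau) on a fine tau-grid,
    following Eq. (3.93): the operator pair (i, j) contributes M[j, i] at the
    time difference tau = tau_c[i] - tau_cbar[j].  Negative differences are
    mapped into [0, beta) with a sign flip (fermionic antiperiodicity)."""
    g = np.zeros(n_bins)
    dtau = beta / n_bins
    for i, t_i in enumerate(tau_c):
        for j, t_j in enumerate(tau_cbar):
            tau, w = t_i - t_j, M[j, i]
            if tau < 0.0:
                tau, w = tau + beta, -w
            g[min(int(tau / dtau), n_bins - 1)] += w
    return g

def static_observables(segments_up, segments_dn, beta):
    """Static estimators of Eq. (3.95): the occupation times tau_occ and the
    overlap of up- and down-segments tau_double, each divided by beta.
    Segments are stored as non-wrapping (start, end) intervals."""
    occ_up = sum(e - s for s, e in segments_up)
    occ_dn = sum(e - s for s, e in segments_dn)
    tau_double = sum(max(0.0, min(e1, e2) - max(s1, s2))
                     for s1, e1 in segments_up for s2, e2 in segments_dn)
    return occ_up / beta, occ_dn / beta, tau_double / beta
```

The static estimators reduce to simple interval arithmetic on the segment lists, which is why they come essentially for free during the sampling.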

With all the necessary ingredients at hand, namely the configurations, the weights and the observables, we can now proceed to set up the Markov chain and determine the acceptance probabilities for the sampling process.

Figure 3.9: Illustration of four subsequent possible steps in the Markov-chain sampling. First, in a), the chain starts with a configuration $C$ consisting of two segments for spin-up and three segments for spin-down. In the next step of the Markov chain, b), a specific spin-down segment is removed. In step c), an anti-segment of the spin-up component is removed, leading to the merging of the two present segments into a single one, and finally, in d), a distinct spin-up segment is inserted again.

Suppose the current configuration $C$ consists of $n_\sigma$ segments occupying the $\sigma$-state of the impurity. Then with probability $p=0.5$ we pick a spin component $\sigma \in \{\uparrow,\downarrow\}$ to update. Possible updates are either to insert a new segment into a free time interval or to remove one of the $n_\sigma$ existing segments from the impurity. For the insertion, which will be chosen with probability $p=0.5$, a time $\tau^{\bar c_\sigma}$ has to be selected at which the creation operator $\bar c_\sigma(\tau^{\bar c_\sigma})$ shall be inserted into the impurity. This time is chosen from the interval $[0,\beta]$ with uniform probability density $1/\beta$. After the time $\tau^{\bar c_\sigma}$ has been determined, the algorithm has to check whether the impurity is empty or occupied at this time. If the impurity is occupied, the move is rejected and the sampling proceeds with the next step; otherwise, when the impurity is empty, the length $l_{max}$ of the empty time interval from $\tau^{\bar c_\sigma}$ to the starting time of the next segment is determined. The annihilation operator $c_\sigma(\tau^{c_\sigma})$ is then inserted at a uniformly chosen time $\tau^{c_\sigma}$ in the interval $[\tau^{\bar c_\sigma}, \tau^{\bar c_\sigma}+l_{max}]$, i.e. with probability density $1/l_{max}$. The proposal probability for the insertion of this segment is then determined by the product of the two probabilities, $p_{prop}(n_\sigma\to n_\sigma+1) = \frac{1}{\beta\, l_{max}}$. On the other hand, for $n_\sigma+1$ segments occupying the impurity, the probability to uniformly pick a certain segment to remove is simply $p_{prop}(n_\sigma+1\to n_\sigma) = \frac{1}{n_\sigma+1}$.

σ+1. With this, the detailed balance condition states pacc(nσ→nσ+ 1)

pacc(nσ+ 1→nσ) = pprop(nσ+ 1→nσ) pprop(nσ→nσ+ 1)

wnσ+1

wnσ

= (nσ+ 1)

βlmax eµσδτσocc−U δτdouble

det(Mσ−1(nσ+ 1)) det(Mσ−1(nσ))

, (3.96)

where in the last step, the formula for the weights (3.84) was inserted and we introduced the quantities

δτσocc andδτdoublewhich are the change in the occupation time and double occupation time between the

two configurations. Note that we took the absolute value of the determinant ratio since we have shown previously that the prefactors(−1)scoming from the determinants cancel with the prefactors from the trace-factor, see (3.81). The Metropolis condition fixes these probabilities to

$p_{acc}(n_\sigma\to n_\sigma+1) = \min\left(1,\; \frac{\beta\, l_{max}}{n_\sigma+1}\, e^{\mu_\sigma \delta\tau_\sigma^{occ} - U \delta\tau^{double}} \left|\frac{\det\big(M_\sigma^{-1}(n_\sigma+1)\big)}{\det\big(M_\sigma^{-1}(n_\sigma)\big)}\right|\right), \qquad (3.97)$

$p_{acc}(n_\sigma+1\to n_\sigma) = \min\left(1,\; \frac{n_\sigma+1}{\beta\, l_{max}}\, e^{-\mu_\sigma \delta\tau_\sigma^{occ} + U \delta\tau^{double}} \left|\frac{\det\big(M_\sigma^{-1}(n_\sigma)\big)}{\det\big(M_\sigma^{-1}(n_\sigma+1)\big)}\right|\right). \qquad (3.98)$

By including updates that insert and remove segments, ergodicity has been obtained on a formal level, since any configuration $C = (n_\uparrow, n_\downarrow, \dots)$ can be reached from any other configuration $\tilde C = (\tilde n_\uparrow, \tilde n_\downarrow, \dots)$

9 Please note that the insertion and removal of segments alone is not sufficient to obtain ergodicity, i.e. the insertion and removal of anti-segments is required for the Markov chain and is not merely a tool that improves the sampling procedure.


in $N = n_\uparrow + n_\downarrow + \tilde n_\uparrow + \tilde n_\downarrow$ steps. However, it turns out that the insertion and removal of segments alone does not guarantee ergodicity in finite sampling processes. This can also be conjectured from the fact that a completely empty impurity and a completely filled impurity are not treated on equal footing by this kind of update, as it is very unlikely to insert a segment spanning the whole time interval $[0,\beta]$.
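The Metropolis probabilities (3.97) and (3.98) translate directly into code. The following sketch is ours, not the thesis implementation: function and argument names are assumptions, and the determinant ratio `det_ratio` would be supplied by the fast-update linear algebra, which is not shown here.

```python
import math

def insertion_acceptance(n_seg, l_max, beta, mu, U, dtau_occ, dtau_double, det_ratio):
    """Eq. (3.97): acceptance probability for inserting a segment.
    det_ratio = det(M^-1(n+1)) / det(M^-1(n))."""
    R = (beta * l_max / (n_seg + 1)) \
        * math.exp(mu * dtau_occ - U * dtau_double) * abs(det_ratio)
    return min(1.0, R)

def removal_acceptance(n_seg, l_max, beta, mu, U, dtau_occ, dtau_double, det_ratio):
    """Eq. (3.98): acceptance probability for the inverse move, removing one
    of the n_seg + 1 segments; det_ratio = det(M^-1(n)) / det(M^-1(n+1))."""
    R = ((n_seg + 1) / (beta * l_max)) \
        * math.exp(-mu * dtau_occ + U * dtau_double) * abs(det_ratio)
    return min(1.0, R)
```

The caller compares the returned probability against a uniform random number to accept or reject the proposed configuration.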

This can be remedied by considering a complementary set of updates, namely the insertion and removal of so-called anti-segments. While a segment is nothing but a time interval $(\tau^{\bar c_\sigma}, \tau^{c_\sigma})$ for which the impurity is occupied, an anti-segment is the contrary, a time interval $(\tau^{c_\sigma}, \tau^{\bar c_\sigma})$ for which the impurity is unoccupied. Expressed in the segment picture, the insertion of an anti-segment $(\tilde\tau^{c_\sigma}, \tilde\tau^{\bar c_\sigma})$ is nothing but the splitting of a single segment $(\tau^{\bar c_\sigma}, \tau^{c_\sigma})$ into two distinct segments $(\tau^{\bar c_\sigma}, \tilde\tau^{c_\sigma})$ and $(\tilde\tau^{\bar c_\sigma}, \tau^{c_\sigma})$, while the removal of an anti-segment $(\tilde\tau^{c_\sigma}, \tilde\tau^{\bar c_\sigma})$ is the recombination of two subsequent segments into a single one. The update probabilities are then determined by the formulas (3.97) and (3.98), since the anti-segment moves follow exactly the same procedure as the segment moves and the weights are identical^{10}. As for the CT-AUX solver, further updates can be implemented in the CT-HYB solver which decrease the auto-correlation times but are not required for ergodicity. One set of such updates is to not only insert and remove (anti-)segments but also to allow for shifting whole segments in time, or for shifting only the start or end point of a segment. These updates are implemented straightforwardly by using the detailed balance condition and are not derived here. However, in some situations a complete exchange of the segments of the two different components is required, especially when the system approaches magnetically ordered phases [67].
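In the segment picture, the anti-segment moves are pure list manipulations. A minimal sketch (ours, with hypothetical helper names; wrap-around across $\beta$ is ignored for brevity):

```python
def insert_antisegment(segment, t_c, t_cbar):
    """Split a segment by inserting the anti-segment (t_c, t_cbar); assumes
    segment start < t_c < t_cbar < segment end (no wrap-around)."""
    s, e = segment
    assert s < t_c < t_cbar < e
    return [(s, t_c), (t_cbar, e)]

def remove_antisegment(seg_left, seg_right):
    """Recombine two subsequent segments into a single one, i.e. remove the
    anti-segment between them."""
    return (seg_left[0], seg_right[1])
```

These helpers only update the segment bookkeeping; the accompanying rows and columns of the matrix $M^{-1}$ still have to be inserted or removed, including the rearrangement discussed in footnote 10.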

Steps of the Sampling Process

The Monte-Carlo flow diagram for the CT-HYB solver is very similar to the diagram for the CT-AUX solver, Fig. 3.4. The steps are as follows:

1. Initialization: At the beginning of the sampling process one possible configuration must be predetermined as the starting configuration to initialize the sampling procedure.

2. For the current configuration $C$ a possible update has to be chosen: either the insertion/removal of a segment $(\tau^{\bar c_\sigma}, \tau^{c_\sigma})$ or the insertion/removal of an anti-segment $(\tau^{c_\sigma}, \tau^{\bar c_\sigma})$. All four of these possibilities are chosen with the same probability so as not to affect the detailed balance formulas above.

3. A new configuration must be proposed, either with one segment $(\tau^{\bar c_\sigma}, \tau^{c_\sigma})$ less or more (or one anti-segment less or more), depending on the previous step.

4. The acceptance probability for this new configuration is determined according to the Metropolis probabilities (3.97) and (3.98).

5. Either the new configuration is accepted and the old configuration is replaced, or the new configuration is rejected and the old configuration stays unchanged (i.e. the new configuration is discarded).

6. A measurement of the observables, i.e. the $g_C$'s, is performed (these may be many observables).

7. Go back to step 2, until the specified number of iterations has been achieved.
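The steps above can be condensed into a generic driver loop. The sketch below is our own illustration: the concrete update, acceptance and measurement routines described in the text are passed in as callables with assumed signatures.

```python
import random

def sample(n_steps, config, propose, acceptance_prob, measure):
    """Skeleton of the Markov-chain loop (steps 2-7).  `propose` returns a
    candidate configuration or None for an outright rejection,
    `acceptance_prob` evaluates the Metropolis probability (3.97)/(3.98),
    and `measure` accumulates the observables of step 6."""
    results = []
    for _ in range(n_steps):
        candidate = propose(config)                   # steps 2 + 3
        if candidate is not None and \
                random.random() < acceptance_prob(config, candidate):
            config = candidate                        # step 5: accept
        results.append(measure(config))               # step 6
    return results
```

Note that a measurement is taken after every step, whether or not the proposed configuration was accepted, exactly as in step 6 above.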

Finally, after deriving the configurations, weights and observables in the previous sections and setting up the sampling procedure, we also compare the CT-HYB solver with exact diagonalization to check whether it is working correctly. In Fig. 3.10, we show a comparison between ED and CT-HYB Green's functions for two distinct hybridization functions, determined by the bath energies $\epsilon_\sigma$ and hybridization parameters $V_\sigma$, which are chosen to be the same for both spin components. Together with these results, the performance of the CT-HYB algorithm can be discussed in the same way as for the CT-AUX solver in Fig. 3.6, which shows the average perturbation order as a function of $\beta t$ and $\beta U$. Since CT-HYB is an expansion in the hybridization, it does not scale with $U$ but rather with the factor $\beta t$, where $t$ is

10 In fact, they are only identical after a rearrangement of the matrix $M$ has been performed since, in contrast to the segment insertion/removal, for the anti-segment insertion/removal the inserted/removed columns do not belong to the same segments. Although this is a trivial modification of the matrix $M$ after the insertion/removal of an anti-segment, it must not be forgotten or neglected since it completely changes the measurements.

Figure 3.10: Comparison of the interacting Green's function $g(\tau) = G_\sigma(\tau)$ of an AIM with a finite number of bath orbitals, $S = 3$, obtained from the CT-HYB algorithm and the exact results from diagonalization of the Hamiltonian. The accuracy does not depend on the interaction parameter $U$, since CT-HYB is an expansion in the hybridization. The Green's functions have been obtained for (left) bath energies $\epsilon_\sigma = (1,-1,1)$ and hybridization parameters $V_\sigma = (1,-1,1)$ and (right) bath energies $\epsilon_\sigma = (3,2,1)$ and hybridization parameters $V_\sigma = (1,2,3)$. Although a very small number of sampling steps, $N = 10^6$, was used, the agreement is already very good and can be improved by taking the usual number of steps, $N = 10^8$, which turned out to be optimal for most simulations that we performed.

the Hubbard hopping parameter, which indirectly sets the strength of the hybridization.

In this chapter, we have established the basic formalism for the continuous-time Monte-Carlo impurity solvers CT-HYB and CT-AUX, for which we will discuss several extensions and improvements in the following chapter and which we will later use in combination with RDMFT to investigate the properties of fermionic lattice models with artificial gauge fields.

4. Extensions and Improvements to the CT-QMC Methods

The continuous-time Monte-Carlo impurity solvers introduced in the previous chapter, namely the CT-AUX and the CT-HYB solver, have so far only been discussed for the case of a single impurity with spin as a good quantum number, i.e. without terms that mix the spins, as they occur for systems showing superfluid ordering or spin-changing hopping parameters. In this chapter, we will discuss the extension of both the CT-AUX and the CT-HYB solver to systems with superfluid pairing and spin-changing hopping processes. We will also extend the CT-AUX solver to cluster impurities containing more than just a single lattice site. Additionally, improvements of the measurements in the CT-HYB solver are discussed, which allow for a direct measurement of the self-energy [74] and which perform the measurements in a different basis than in the previous chapter, i.e. in a polynomial basis instead of on the discretized imaginary-time interval [16].