
3. The density matrix renormalization group (DMRG)


However, compared to standard DMRG, the numerical complexity of these techniques is much larger. Recently developed techniques such as correlator product states (CPS) in combination with variational Monte Carlo seem to be a promising alternative [131, 132].

The question remains how to map a system to a one-dimensional chain (i.e., how to number the spins) to get the best results with the DMRG method. Long-range interactions seem to decrease the accuracy of DMRG, so it is reasonable to number the spins such that long-range interactions are minimized as far as possible.

We come back to this question in chapter 4. Furthermore, it is not clear how to grow the system during the warm-up phase (cf. Fig. 3.3). In our implementation, we grow the system from both ends and treat the spin interactions according to the coupling matrix $J_{ij}$, with the exception that we always explicitly connect both blocks with an antiferromagnetic interaction of strength one. Otherwise, for, e.g., a one-dimensional system with only nearest-neighbor interactions, we would have two isolated blocks in the warm-up phase, and the superblock would not resemble the "final" system.

3.7. Dynamical DMRG

It is also possible to calculate zero-temperature dynamical correlation functions using extensions of the DMRG technique. There already exists an extension of the Lanczos algorithm to calculate dynamical correlation functions [133]. The idea of the Lanczos approach is to rewrite the dynamical correlation function in the form of a continued fraction. This technique can be built into the DMRG algorithm [134], but the accuracy is rather limited for many systems [7]. We have implemented and used the more accurate, but also more time-consuming dynamical DMRG (DDMRG) technique [7, 8, 135]. We do not distinguish between the DDMRG and the correction vector DMRG (which is sometimes done in the literature, see [6, 8]) in this work, since the differences are very small and we use parts of both variants in our implementation.

In the following, we focus on the calculation of zero-temperature dynamical correlation functions such as

\[
G_{A,B}(\omega+i\eta) = -\frac{1}{\pi}\,\langle 0|\,A\,\frac{1}{E_0+\omega+i\eta-H}\,B\,|0\rangle ,
\tag{3.51}
\]

where $|0\rangle$ denotes the ground state with the energy $E_0$. $A$ and $B$ are some transition operators, e.g., the spin operators for inelastic neutron scattering (INS, see Sec. 2.3.2). For the comparison to a spectroscopic experimental method such as INS, the important part of this function is the imaginary part

\[
\operatorname{Im} G_{A,B}(\omega+i\eta)
= \frac{1}{\pi}\,\langle A|\,\frac{\eta}{(E_0+\omega-H)^2+\eta^2}\,|B\rangle
= \sum_n \langle A|n\rangle\langle n|B\rangle\,\delta_\eta(\omega+E_0-E_n),
\tag{3.52}
\]

where $\delta_\eta(x) = (\eta/\pi)/(x^2+\eta^2)$ is the Lorentzian function with $\lim_{\eta\to 0}\delta_\eta(x) = \delta(x)$,$^{5}$ $|A\rangle \equiv A|0\rangle$, and $|B\rangle \equiv B|0\rangle$. $E_n$ denotes the energy eigenvalue belonging to the eigenstate $|n\rangle$. Determining the matrix elements $\langle A|n\rangle$ and $\langle n|B\rangle$ by directly calculating the excited states $|n\rangle$ using standard DMRG [136] is possible only for low energies or simple systems, since all energy eigenstates up to the desired state have to be included as target states in the reduced density matrix. Using many target states, however, decreases the accuracy of a DMRG calculation [86]. Furthermore, states that do not contribute to the dynamical correlation function would possibly still be targeted. The DDMRG method solves these problems and requires only four target states.
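The sum-over-states form of Eq. (3.52) can be made concrete with a small exact-diagonalization sketch (illustrative only, not part of the thesis implementation; all function names are made up): every pole at $E_n - E_0$ is broadened with the Lorentzian $\delta_\eta$.

```python
import numpy as np

def lorentzian(x, eta):
    """delta_eta(x) = (eta/pi) / (x^2 + eta^2), cf. Eq. (3.52)."""
    return (eta / np.pi) / (x**2 + eta**2)

def spectral_function(h, a, b, omegas, eta):
    """Im G_{A,B}(omega + i*eta) by full diagonalization (tiny systems only):
    sum_n <A|n><n|B> delta_eta(omega + E0 - E_n), with |A> = A|0>, |B> = B|0>."""
    energies, states = np.linalg.eigh(h)
    e0, gs = energies[0], states[:, 0]
    a_vec = a @ gs                      # |A> = A|0>
    b_vec = b @ gs                      # |B> = B|0>
    a_n = states.conj().T @ a_vec       # <n|A> for every eigenstate |n>
    b_n = states.conj().T @ b_vec       # <n|B>
    return np.array([np.sum(a_n.conj() * b_n *
                            lorentzian(w + e0 - energies, eta))
                     for w in omegas])
```

For $A = B$ every term is $|\langle n|A\rangle|^2\,\delta_\eta \ge 0$, so the resulting curve is non-negative, and its frequency integral approaches $\langle A|A\rangle$ since each Lorentzian integrates to one.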

The basic idea of the DDMRG method is a reformulation of equation (3.52) using the so-called correction vector, which is defined as [7]

\[
|C(\omega+i\eta)\rangle = \frac{1}{E_0+\omega+i\eta-H}\,|B\rangle .
\tag{3.53}
\]

If one splits the correction vector into $|C(\omega+i\eta)\rangle = |C_r(\omega+i\eta)\rangle + i\,|C_i(\omega+i\eta)\rangle$ with

\[
|C_i(\omega+i\eta)\rangle = \frac{-\eta}{(E_0+\omega-H)^2+\eta^2}\,|B\rangle
\tag{3.54}
\]

and

\[
|C_r(\omega+i\eta)\rangle = \frac{H-E_0-\omega}{\eta}\,|C_i(\omega+i\eta)\rangle ,
\tag{3.55}
\]

a direct calculation of $\operatorname{Im} G_{A,B}(\omega+i\eta)$ is possible as [8, 137]

\[
\operatorname{Im} G_{A,B}(\omega+i\eta) = -\frac{1}{\pi}\,\langle A|C_i(\omega+i\eta)\rangle .
\tag{3.56}
\]

Within the reduced DMRG basis, $|C_i(\omega+i\eta)\rangle$ is calculated as the solution of the linear equation system

\[
\left[(E_0+\omega-H)^2+\eta^2\right]|C_i(\omega+i\eta)\rangle = -\eta\,|B\rangle .
\tag{3.57}
\]
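In a basis small enough for dense linear algebra, Eqs. (3.53), (3.54), and (3.57) can be checked against each other directly: the solution of the linear equation system must equal the imaginary part of the full resolvent applied to $|B\rangle$. A short sketch (illustrative, not the thesis code) with NumPy:

```python
import numpy as np

def correction_vector_imag(h_eff, e0, b_vec, omega, eta):
    """Solve [(E0 + w - H)^2 + eta^2] |Ci> = -eta |B>  (Eq. 3.57)
    by a dense direct solve -- feasible only in a small (reduced) basis."""
    n = len(b_vec)
    x = (e0 + omega) * np.eye(n) - h_eff          # E0 + w - H
    return np.linalg.solve(x @ x + eta**2 * np.eye(n), -eta * b_vec)
```

Since $1/(x+i\eta) = (x-i\eta)/(x^2+\eta^2)$ for the Hermitian operator $x = E_0+\omega-H$, the imaginary part of $(x+i\eta)^{-1}|B\rangle$ is exactly $-\eta\,(x^2+\eta^2)^{-1}|B\rangle$, i.e., $|C_i\rangle$ of Eq. (3.54).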

$^{5}$In practice, one always runs calculations with a finite $\eta$, of course.


Solving this linear equation system is equivalent to minimizing the functional [8, 138]

\[
W_{B,\eta}(\omega,\psi) = \langle\psi|\left[(E_0+\omega-H)^2+\eta^2\right]|\psi\rangle
+ \eta\,\langle B|\psi\rangle + \eta\,\langle\psi|B\rangle .
\tag{3.58}
\]

There are many possibilities for solving the linear equation system (3.57) or minimizing the functional, see, e.g., [139–141]. We have used a simple conjugate gradient (CG) algorithm without preconditioning as described in Refs. [103, 138] for the calculation of the correction vector. The CG method is an effective numerical method for solving symmetric positive definite linear equation systems. Like the Lanczos method, it also belongs to the Krylov subspace methods [102] and only needs the operation $|\phi\rangle = \left[(E_0+\omega-H)^2+\eta^2\right]|\psi\rangle$ (besides simple vector operations such as addition, etc.). Since we only have an approximation of $H$, but not of $H^2$, in the current (reduced) superblock basis, we make the approximation [8]

\[
\left[(E_0+\omega-H)^2\right]_{\text{eff}} \approx \left(E_0+\omega-H_{\text{eff}}\right)^2 .
\tag{3.59}
\]

Here, the subscript "eff" means the representation of the operator in the reduced superblock basis. The construction of $[(E_0+\omega-H)^2]_{\text{eff}}$ is numerically much too expensive for, e.g., Heisenberg systems with many spins. However, the squaring of a matrix leads to a slowdown of the convergence [6]. The convergence strongly depends on the condition number of the matrix (the representation of the operator on the left-hand side of Eq. (3.57), to be more precise). The condition number (of a normal matrix) is defined as $\kappa = |\lambda_{\max}|/|\lambda_{\min}|$, where $\lambda_{\max}$ is the eigenvalue of the matrix with the largest and $\lambda_{\min}$ the eigenvalue with the smallest absolute value, respectively [142]. A smaller condition number usually leads to better convergence of the algorithm, which means that squaring the matrix slows down the convergence.
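A minimal sketch of such a CG iteration (assuming a real symmetric $H_{\text{eff}}$ that is available only through matrix-vector products; the function names are illustrative, not the thesis code) could look as follows. The squared operator of Eq. (3.57) is applied as two successive products with $(E_0+\omega-H_{\text{eff}})$, in the spirit of the approximation (3.59):

```python
import numpy as np

def cg_correction_vector(apply_h, e0, b_vec, omega, eta,
                         tol=1e-10, max_iter=5000):
    """Plain CG (no preconditioning) for
    [(E0 + w - H)^2 + eta^2] |Ci> = -eta |B>,
    using only matrix-vector products with H: the square is realized as two
    applications of (E0 + w - H), one extra product per iteration."""
    def apply_m(v):
        y = (e0 + omega) * v - apply_h(v)          # (E0 + w - H) v
        return (e0 + omega) * y - apply_h(y) + eta**2 * v

    x = np.zeros_like(b_vec)
    r = -eta * b_vec - apply_m(x)                  # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        mp = apply_m(p)
        alpha = rs / (p @ mp)
        x += alpha * p
        r -= alpha * mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

The operator $(E_0+\omega-H)^2+\eta^2$ is symmetric positive definite (its spectrum is bounded below by $\eta^2$), so CG is applicable; the $\eta^2$ floor is also what keeps the condition number finite, in line with the convergence discussion above.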

Here, preconditioning techniques might help [102, 142]. We have not used preconditioning, but have employed the wave function transformation (see Sec. 3.4.5) for the correction vector to speed up the algorithm.$^{6}$ A good guess for the solution of the linear equation system reduces the number of CG iterations needed for convergence. It is also clear that the convergence strongly depends on the broadening parameter $\eta$. The limit $\eta \to \infty$ would lead to a condition number of

$^{6}$Here, a minor difficulty emerges: Let $|0'\rangle$ be the state that is obtained as the transformation of the ground state from the previous DMRG step. This state is then used as a starting point for the Lanczos algorithm and the "correct" ground state $|0\rangle$ is obtained. Since the phase of the ground state does not matter, it might be that $\langle 0'|0\rangle \approx -1$, i.e., the phase is changed in the Lanczos calculation. If the imaginary part of the correction vector is also transformed, one has to take care of this phase change, since $|0\rangle$ and not $|0'\rangle$ is used for the linear equation system that is solved. We deal with this problem by checking whether the functional (3.58) is smaller for $|C_i'(\omega+i\eta)\rangle$ or $-|C_i'(\omega+i\eta)\rangle$, where $|C_i'(\omega+i\eta)\rangle$ denotes the transformed imaginary part of the correction vector from the previous DMRG step.

one and thus optimal convergence. Therefore, the larger the value of $\eta$, the better the convergence of the CG algorithm.

The target states for the reduced density matrix are $|0\rangle$, $|B\rangle$, $|C_i(\omega+i\eta)\rangle$, and $|C_r(\omega+i\eta)\rangle$ [7, 8]. Although not needed for the calculation of the imaginary part of the dynamical correlation function, $|C_r(\omega+i\eta)\rangle$ is also included in order to minimize the error in the calculation of (3.59) [6, 8]. As in Ref. [140], we weight the ground state with 0.4 and each of the other states with 0.2 in the density matrix (cf. Eq. (3.25)). However, only the ground state is normalized. We normalize the other states when we include them in the density matrix. Otherwise, the density matrix (3.25) would not have trace one.

If the correction vector has been calculated, the value of the dynamical correlation function for a specific frequency can be calculated using equation (3.56). If $A = B$, we can also use the functional (3.58). Inserting $|C_i(\omega+i\eta)\rangle$ leads to

\[
W_{B,\eta}(\omega, C_i) = \eta\,\langle B|C_i(\omega+i\eta)\rangle
= -\pi\eta\operatorname{Im} G_{B,B}(\omega+i\eta) .
\tag{3.60}
\]

Using the functional for the calculation of the dynamical correlation function should lead to a smaller error than using equation (3.56) if we have only approximately calculated the correction vector [8]. If the error in the calculation of the correction vector is $\epsilon$, the error in the dynamical correlation function is also of order $\epsilon$ if Eq. (3.56) is used. In contrast, if Eq. (3.60) is used, the error is only of order $\epsilon^2$ [8].

However, this statement is not strictly true if one uses the approximation (3.59).

For the systems that we have analyzed with DDMRG, we have found no significant differences between the two approaches, or the differences did not matter. We have always calculated both values for dynamical correlation functions with $A = B$.
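The different error orders of Eqs. (3.56) and (3.60) are easy to demonstrate numerically: since the exact correction vector minimizes the functional (3.58), a perturbation of size $\epsilon$ shifts the functional value only at order $\epsilon^2$, while the plain overlap shifts at order $\epsilon$. A small illustrative sketch (not from the thesis; real arithmetic assumed):

```python
import numpy as np

def im_g_estimates(h, e0, b, omega, eta, psi):
    """Two estimates of Im G_{B,B}(omega + i*eta) from an approximate
    correction vector psi:
    - via the overlap, Eq. (3.56): error linear in the error of psi;
    - via the functional, Eqs. (3.58)/(3.60): error quadratic, since the
      exact |Ci> is the minimizer of the functional."""
    n = len(b)
    x = (e0 + omega) * np.eye(n) - h
    m = x @ x + eta**2 * np.eye(n)
    im_overlap = -(b @ psi) / np.pi                 # Eq. (3.56) with A = B
    w_func = psi @ (m @ psi) + 2 * eta * (b @ psi)  # Eq. (3.58), real psi, b
    im_functional = -w_func / (np.pi * eta)         # Eq. (3.60) rearranged
    return im_overlap, im_functional
```

Expanding $W_{B,\eta}$ around the minimizer shows the quadratic behavior explicitly: $W(C_i + d) = W(C_i) + \langle d|[(E_0+\omega-H)^2+\eta^2]|d\rangle$, because the linear term vanishes by Eq. (3.57).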

Summarizing, the DDMRG method consists of the following steps that are carried out for fixed $\omega$ and $\eta$ within the (reduced) superblock space, i.e., in step 2 of the algorithm described in Sec. 3.3.2: The ground state $|0\rangle$ and the ground state energy $E_0$ are calculated using the Lanczos method as described in Sec. 3.4.3. The state $|B\rangle$ is determined by applying the operator $B$ to the ground state: $|B\rangle = B|0\rangle$. $|C_i(\omega+i\eta)\rangle$ is calculated using the CG algorithm, and $|C_r(\omega+i\eta)\rangle$ is obtained via Eq. (3.55). Then, the reduced density matrix is formed using these four states.

It has to be noted here that the procedure has to be repeated separately for every frequency ω. For this reason, the DDMRG method is numerically very expensive.

Also, depending on the value of $\omega$, sometimes more than 1000 CG steps are needed for convergence, so that the calculation of the correction vector is usually numerically much more demanding than the calculation of the ground state. However, the complete independence of the calculations for different values of $\omega$ gives rise to a trivial parallelization of the program. Therefore, DDMRG is very well suited for high-performance computers.
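The trivial $\omega$-parallelism can be sketched as a job farm in which each frequency point is an independent task. In the toy sketch below (illustrative names, not the thesis code), a dense solve of Eq. (3.57) stands in for a full DDMRG run at fixed $(\omega, \eta)$, and the `map_fn` argument is a hypothetical hook where a `concurrent.futures.ProcessPoolExecutor` or an MPI scatter would be plugged in:

```python
import numpy as np

def run_one_frequency(args):
    """Stand-in for one full DDMRG run at fixed (omega, eta): solve Eq. (3.57)
    in a tiny fixed basis and return Im G_{B,B} via Eq. (3.56) with A = B."""
    h, e0, b, omega, eta = args
    n = len(b)
    x = (e0 + omega) * np.eye(n) - h
    ci = np.linalg.solve(x @ x + eta**2 * np.eye(n), -eta * b)
    return omega, -(b @ ci) / np.pi

def spectrum(h, e0, b, omegas, eta, map_fn=map):
    """Every frequency point is an independent task with no communication;
    swap map_fn for, e.g., ProcessPoolExecutor(...).map to parallelize."""
    tasks = [(h, e0, b, w, eta) for w in omegas]
    return dict(map_fn(run_one_frequency, tasks))
```

Because the tasks share only read-only input (the Hamiltonian data) and return one number each, the parallel efficiency is essentially perfect, which is exactly the property exploited on high-performance computers.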