

Figure 2.7: MPS diagram for a simple scalar product between states $|A\rangle$ and $|B\rangle$ encoded as MPS (panel a), leading to the plain Fock-space contraction in panel (b) (vertical lines). The left- (right-) most horizontal lines refer to singleton spaces, hence may also be simply contracted for convenience, as indicated by the vertical dashed lines. By convention, connected lines are contracted, i.e. summed over, hence the leading sum in panel (b) is implicit, as emphasized by putting it in brackets. Explicit usage of the transfer matrices $P_k$ in evaluating the scalar product would scale like $\mathcal{O}(N \cdot d D^4)$ in cost, but this is not yet optimal. Panel (c) indicates the actual way of calculating a scalar product of MPS states by sequential contraction of (i) $B$ and (ii) $A$. The cost for each scales like $\mathcal{O}(d D^3)$.

2.6 MPS Algebra

An MPS describes a potentially strongly-correlated quantum many-body state in an exponentially large Hilbert space, as introduced in Eq. (2.3). While, in principle, the coefficient $\prod_n A^{[\sigma_n]}$ can be calculated for arbitrary but fixed $(\sigma_1, \ldots, \sigma_N)$, this would quickly become exponentially prohibitive for the entire Hilbert space. In practice, however, this is never required. An MPS is stored by its constituting A-tensors, while physical quantities such as expectation values can be calculated efficiently by iterative means. This is based on a generalized matrix-product structure of the underlying (quasi-) one-dimensional physical Hamiltonian.

2. Matrix Product States

2.6.1 Simple example: Scalar Product

Consider the very basic example of the scalar product of two states encoded as MPS,
\[
\begin{aligned}
\langle \psi_A | \psi_B \rangle
&= \sum_{\sigma_1,\ldots,\sigma_N} \; \sum_{\sigma_1',\ldots,\sigma_N'} \;
\prod_{n=1}^{N} A^{[\sigma_n]*} \, \prod_{n=1}^{N} B^{[\sigma_n']} \,
\langle \sigma_1, \ldots, \sigma_N \,|\, \sigma_1', \ldots, \sigma_N' \rangle \\
&= \prod_{n=1}^{N} \, \underbrace{\sum_{\sigma_n} A^{[\sigma_n]*} \otimes B^{[\sigma_n]}}_{\equiv\, P_n},
\end{aligned}
\qquad (2.20)
\]

with its MPS diagram shown in Fig. 2.7. Due to $\langle \sigma_n | \sigma_{n'}' \rangle = \delta_{nn'} \delta_{\sigma\sigma'}$, the two MPS in panel (a) directly link vertically, generating the MPS diagram in panel (b), which exactly reflects the last line in Eq. (2.20). Given an open boundary, the left- (right-) most horizontal line connects to the one-dimensional vacuum state. Therefore this singleton index space may also be simply contracted, as indicated by the vertical dashed line. This way, there are no open lines left in the diagram, which allows full contraction to a number, i.e. the overlap of the two input states.
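To make the transfer matrix of Eq. (2.20) concrete, it can be built explicitly in a few lines; the following is a minimal NumPy sketch, where the dimensions d and D, the random tensors, and all variable names are illustrative choices, not taken from the text:

```python
import numpy as np

# Illustrative dimensions: local (physical) dimension d, bond dimension D.
d, D = 2, 3
rng = np.random.default_rng(0)

# A-tensors with assumed index order (left bond, physical, right bond).
A = rng.standard_normal((D, d, D))
B = rng.standard_normal((D, d, D))

# Transfer matrix P = sum_sigma A^{[sigma]*} (x) B^{[sigma]},
# a D^2 x D^2 matrix as stated in the text.
P = sum(np.kron(A[:, s, :].conj(), B[:, s, :]) for s in range(d))
assert P.shape == (D * D, D * D)
```

Multiplying such $P_n$ matrices directly would incur the $\mathcal{O}(D^6)$ cost discussed in the text; the sequential scheme of Fig. 2.7(c) avoids forming $P_n$ altogether.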

Consequently, the scalar product of two MPS has been reduced to roughly $N$ multiplications of the $D^2 \times D^2$ dimensional transfer matrices $P_n$. While the multiplication of two $P_n$ would scale like $\mathcal{O}(D^6)$, for open boundary conditions and iterative prescription, however, this is essentially reduced to a matrix-vector multiplication in the space of transfer matrices, with overall cost $\mathcal{O}(N D^4)$. The latter can still be further improved upon by not combining $A_n$ and $B_n$ into a single object $P_n$, but rather dealing with the original block structure while calculating the scalar product.


This is indicated in Fig. 2.7(c). Starting from the left, $A_1$ and $B_1$ can be contracted, generating a matrix $X_1$. This again sets the stage for an iterative construction. Let $X_{n-1}$ represent the scalar product contracted up to site $n-1$. At the maximum, it is a $D \times D$ matrix (or, equivalently, a $D^2$ dimensional vector in the space of the transfer matrices $P_n$). Then including site $n$ requires (i) to contract $B_n$ with cost $\mathcal{O}(d D^3)$, followed by (ii) the contraction of $A_n$, again with cost $\mathcal{O}(d D^3)$. Hence by working sequentially through the A and B blocks, the cost for calculating the scalar product of two MPS is $\mathcal{O}(d N D^3)$.
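The sequential scheme can be written out as a short function. The following NumPy sketch is illustrative: the function names, the (left bond, physical, right bond) index convention, and the tiny random example are assumptions for demonstration; the brute-force cross-check is exponential in $N$ and only feasible for such small cases.

```python
import numpy as np

def mps_overlap(As, Bs):
    """<A|B> by sequential contraction, cost O(d D^3) per site.

    Tensors use the assumed index order (left bond, physical, right bond);
    the outermost bonds are singletons, matching the text's convention.
    """
    X = np.ones((1, 1))                            # contracted singleton boundary
    for A, B in zip(As, Bs):
        Y = np.einsum('ab,bsc->asc', X, B)         # (i) contract B_n, O(d D^3)
        X = np.einsum('ase,asc->ec', A.conj(), Y)  # (ii) contract A_n^*, O(d D^3)
    return X[0, 0]

def dense_state(ts):
    """Exponentially expensive full contraction, for cross-checking only."""
    psi = np.ones((1, 1))                          # (physical block, right bond)
    for t in ts:
        psi = np.einsum('pa,asb->psb', psi, t)
        psi = psi.reshape(-1, psi.shape[2])
    return psi.ravel()

# tiny random example
d, D, N = 2, 3, 4
rng = np.random.default_rng(1)
def random_mps():
    dims = [1] + [D] * (N - 1) + [1]
    return [rng.standard_normal((dims[n], d, dims[n + 1])) for n in range(N)]
As, Bs = random_mps(), random_mps()
```

Note that the order of the two contractions per site is what keeps each step at $\mathcal{O}(d D^3)$: contracting $A_n^*$ and $B_n$ with each other first would produce the $D^2 \times D^2$ transfer matrix again.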


2.6.2 Operator expectation values

The scalar product above already showed all the essential ingredients that are also required for the calculation of expectation values. The scalar product calculates the matrix element $\langle A | \hat 1 | B \rangle$ with respect to the identity operator,
\[
\hat 1 \equiv \hat 1_1 \otimes \hat 1_2 \otimes \ldots \otimes \hat 1_N .
\]
It may be replaced by an arbitrary other tensor product of operators that act locally simultaneously,
\[
\hat C \equiv \hat c_1 \otimes \hat c_2 \otimes \ldots \otimes \hat c_N . \qquad (2.21)
\]

Figure 2.8: MPS diagram for obtaining the matrix elements of an operator $\hat c_n$ acting locally at the last site $n$ of an effective state space $|s_n\rangle$. The iterations $n' < n$ drop out due to the LR-orthonormalization of the A-tensors and hence can be short-circuited, as explained with Fig. 2.4(a). This leads to the simple actual contraction of the A-tensor of site $n$ with the local operator $\hat c_n$, as indicated to the right.

The only extra cost is to also contract the respective operator $\hat c_n$ for a given iteration $n$. This can be done, for example, as a prior step to Fig. 2.7(c-i): $\hat c_n$ may be contracted onto the local state space for $B_n$ first, resulting in $\tilde B_n$, which then is contracted with $X_{n-1}$.
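This prescription can be sketched as follows; the NumPy illustration assumes the (left bond, physical, right bond) index convention and invents its own helper names, which are not the text's notation:

```python
import numpy as np

def overlap(As, Bs):
    """<A|B> by sequential contraction, as in Fig. 2.7(c)."""
    X = np.ones((1, 1))
    for A, B in zip(As, Bs):
        Y = np.einsum('ab,bsc->asc', X, B)
        X = np.einsum('ase,asc->ec', A.conj(), Y)
    return X[0, 0]

def expectation(As, cs):
    """<A| c_1 (x) ... (x) c_N |A> / <A|A>.

    Each local operator c_n is first contracted onto the local state
    space of B_n = A_n, giving Btilde_n, which then enters the plain
    overlap together with X_{n-1}.
    """
    Bts = [np.einsum('st,atc->asc', c, A) for c, A in zip(cs, As)]
    return overlap(As, Bts) / overlap(As, As)

# tiny random example
d, D, N = 2, 3, 4
rng = np.random.default_rng(2)
dims = [1] + [D] * (N - 1) + [1]
As = [rng.standard_normal((dims[n], d, dims[n + 1])) for n in range(N)]
```

For the identity operators of the previous section this reduces to the plain norm ratio, i.e. the value 1.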

Globally, assuming $A = B$, the expectation value then can be simply computed as $\langle \hat C \rangle_A \equiv \langle A | \hat C | A \rangle / \langle A | A \rangle$. Here no specific internal orthonormalization of the A-tensors is required. If the operator $\hat C$ is short range or local to some site $n$, however, then the global contraction to obtain the expectation value can be short-circuited by using LR/RL-orthonormalized state spaces. This is explained next.

2.6.3 Operator representation in effective state space

Matrix elements of local operators can be efficiently obtained given an effective orthonormal MPS basis set $|s\rangle_n$ of the part of the system they act upon. Consider, for example, an operator $\hat C = \hat c_n$ that acts locally on site $n$ only, as shown in Fig. 2.8. The effective state space $|s\rangle_n$ is assumed to be written as MPS with LR-orthonormal A-tensors. The state space left of site $n' = 1$ describes the empty state, indicated by the bullet. Being a singleton space, it can be simply contracted, for convenience, as indicated by the vertical dashed line.

Using Fig. 2.4 then, the contraction of the $A_1^{(*)}$ tensors can be eliminated, i.e. shortcut. This implies that now the left legs of the $A_2^{(*)}$ tensors are directly connected, such that again Fig. 2.4 applies. The argument can be repeated up to site $n-1$, which is equivalent to saying that, by construction, of course, also $|s_{n-1}\rangle$ describes an orthonormal state space.

The resulting object on the r.h.s. of Fig. 2.4 only involves the last A-tensor from site $n$ together with the operator matrix elements of $\hat c_n$ expressed in the local basis $|\sigma_n\rangle$. This simplification is a direct benefit of using orthonormalized state spaces throughout.
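The short-circuiting can be checked numerically: after a QR-based left-orthonormalization (a standard construction that may differ in detail from the text's prescription), $\sum_\sigma A^{[\sigma]\dagger} A^{[\sigma]} = \hat 1$ holds on every site, so the full contraction of a local matrix element collapses to the single-site contraction. All names and dimensions below are illustrative.

```python
import numpy as np

# random MPS with assumed index order (left bond, physical, right bond)
d, D, N = 2, 3, 4
rng = np.random.default_rng(3)
dims = [1] + [D] * (N - 1) + [1]
As = [rng.standard_normal((dims[n], d, dims[n + 1])) for n in range(N)]

# LR-orthonormalize by sweeping QR decompositions from the left
R = np.ones((1, 1))
for n in range(N):
    M = np.einsum('ab,bsc->asc', R, As[n])
    Dl, _, Dr = M.shape
    Q, R = np.linalg.qr(M.reshape(Dl * d, Dr))
    As[n] = Q.reshape(Dl, d, Q.shape[1])

# matrix element of a local operator c at the last site: the contraction
# over all sites n' < n reduces to the identity, so only site n remains
c = rng.standard_normal((d, d))
X = np.ones((1, 1))
for A in As[:-1]:
    X = np.einsum('ase,ab,bsc->ec', A.conj(), X, A)   # stays the identity
A = As[-1]
full = np.einsum('asb,ac,st,ctd->bd', A.conj(), X, c, A)
short = np.einsum('asb,st,atd->bd', A.conj(), c, A)
```

Here `full` carries the contraction over all sites explicitly, while `short` uses only the last A-tensor, exactly as on the r.h.s. of Fig. 2.4.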

Finally, given a one-dimensional physical system with short-range interaction, the matrix elements of the Hamiltonian can be constructed efficiently in an iterative fashion, as demonstrated in Fig. 2.9. Since the effective state space $|s\rangle_n$ is given in terms of an MPS up to iteration $n$, also only the terms of the Hamiltonian are included that are fully contained



Figure 2.9: MPS diagrams on obtaining the matrix elements of a one-dimensional Hamiltonian with short-range interactions only. Panel (a) shows the overall object to be calculated, indicating that $H_n$ acts on all sites involved. Panel (b) depicts the individual local ($\sum_i \hat h_i$) and nearest-neighbor terms ($\sum_{n,i} \hat c_{i,n+1}\, \hat c_{i,n}$) constituting the Hamiltonian. Panel (c) shows an efficient iterative scheme that uses the matrix elements of $H_{n-1}$ of all terms up to and including site $n-1$ obtained from previous iterations.

within the block of sites $n' = 1, \ldots, n$, which is denoted by $H_n$. By construction, $\hat H_n$ can be built iteratively, having $\hat H_n = \hat H_{n-1} + \hat h_{n-1,n}$, where $\hat h_{n-1,n} \equiv \hat h_n + \sum_i \hat c_{i,n-1} \otimes \hat c_{i,n}$ describes the new terms to be added to the Hamiltonian when enlarging the block by one site (local term to site $n$ and nearest-neighbor interaction between sites $n-1$ and $n$, respectively).¹

As indicated in Fig. 2.9(c), the iterative scheme uses $H_{n-1}$ obtained from previous iterations. Thus to obtain $H_n$, $H_{n-1}$ is propagated to site $n$ (first term), while the new local term $h_n$ (center term), as well as the nearest-neighbor terms explicitly involving site $n$ interacting with site $n-1$ (right term), need to be added. By convention, hats are reserved for operators acting in the full Hilbert space, while no hats are used for explicit matrix representations of operators.

¹While operators such as the Hamiltonian may also be represented as matrix product operators [40], within the NRG this was neither required nor useful, since $H_n$ must only represent those terms of the Hamiltonian that are fully contained within the sites $n' \leq n$.
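The recursion $\hat H_n = \hat H_{n-1} + \hat h_{n-1,n}$ translates directly into code once all operators are carried along in the effective basis $|s_n\rangle$. The sketch below is a NumPy illustration under assumed conventions: the `(c_left, c_right)` interface, the spin-1/2 example operators, and all names are hypothetical, with LR-orthonormal A-tensors and no truncation so that the result can be checked against the dense Hamiltonian.

```python
import numpy as np

def to_eff(A, X, op):
    """A_n^dagger (X (x) op) A_n, with A of shape (left, physical, right)."""
    return np.einsum('asb,ac,st,ctd->bd', A.conj(), X, op, A)

def block_hamiltonian(As, h_loc, c_ops):
    """Matrix elements H_n of all Hamiltonian terms fully contained within
    sites 1..n, built iteratively as in Fig. 2.9(c).

    Assumes LR-orthonormal A-tensors; c_ops lists (c_left, c_right) pairs
    for the nearest-neighbor terms (an illustrative interface).
    """
    d = h_loc.shape[0]
    H = np.zeros((1, 1))
    Cs = [np.zeros((1, 1))] * len(c_ops)    # c_{i,n-1} in the effective basis
    for n, A in enumerate(As):
        I = np.eye(A.shape[0])
        Hn = to_eff(A, H, np.eye(d))        # propagate H_{n-1} to site n
        Hn += to_eff(A, I, h_loc)           # new local term h_n
        for i, (cl, cr) in enumerate(c_ops):
            if n > 0:
                Hn += to_eff(A, Cs[i], cr)  # c_{i,n-1} (x) c_{i,n}
            Cs[i] = to_eff(A, I, cl)        # keep c_{i,n} for the next step
        H = Hn
    return H

# check on a tiny spin-1/2 chain without truncation, so the A-tensors
# span the full Hilbert space and H_n must match the dense Hamiltonian
d, N = 2, 3
sz = np.diag([0.5, -0.5])
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
rng = np.random.default_rng(4)
As, V, Dl = [], np.eye(1), 1
for n in range(N):
    Q, _ = np.linalg.qr(rng.standard_normal((Dl * d, Dl * d)))
    As.append(Q.reshape(Dl, d, Dl * d))
    V = np.einsum('pa,asb->psb', V, As[-1]).reshape(-1, Dl * d)
    Dl *= d

def kron_at(ops):
    out = np.eye(1)
    for m in range(N):
        out = np.kron(out, ops.get(m, np.eye(d)))
    return out

H_full = sum(kron_at({m: sz}) for m in range(N))
H_full = H_full + sum(kron_at({m: sx, m + 1: sx}) for m in range(N - 1))
H_eff = block_hamiltonian(As, sz, [(sx, sx)])
```

Only the operators that can still connect to future sites, here the $c_{i,n}$ of the current boundary site, need to be stored and propagated, which is what keeps the scheme iterative.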