
Differentiation and integration

Functions of anticommuting numbers. Due to (B.2), an arbitrary function f can be at most linear in each of the generators χ_1, χ_2, … (or in any other anticommuting degree of freedom ζ ∈ A_F).

Put differently, the Taylor series of f(χ_1, χ_2, …) terminates at first order in each variable. Hence the most general function of a single Grassmann variable f(χ) can be written in the form

f(χ) = a_0 + χ a_1 , (B.22)

where a0, a1 ∈ A may contain commuting and anticommuting terms, but no dependence on χ.

Note that in order for f to have definite parity, we need a_0 ∈ A_B and a_1 ∈ A_F, or vice versa.

Similarly, functions of several variables can be written out in a multilinear form. For example, the most general function of three Grassmann generators f(χ_1, χ_2, χ_3) is

f(χ_1, χ_2, χ_3) = a_000 + χ_1 a_100 + χ_2 a_010 + χ_3 a_001 + χ_1χ_2 a_110 + χ_1χ_3 a_101 + χ_2χ_3 a_011 + χ_1χ_2χ_3 a_111 , (B.23)

where again all the coefficients a_ijk are independent of χ_1, χ_2, and χ_3. Moreover, if f depends on all generators of the Grassmann algebra, the coefficients a_ijk (and their analogs) can be chosen as c-numbers without loss of generality.

Differentiation. Derivatives of a function f(χ) with respect to the Grassmann generator χ are defined by analogy with ordinary calculus as the linear part of the corresponding Taylor expansion,

∂/∂χ (a_0 + χ a_1) = a_1 . (B.24)

The so-defined operation is obviously linear. However, care must be taken regarding the order of terms. For instance, if a_1 ∈ A_F, then

∂/∂χ (a_0 + a_1 χ) = ∂/∂χ (a_0 − χ a_1) = −a_1 . (B.25)

Generally speaking, by exploiting linearity, the differential operator ∂/∂χ thus behaves like an element of A_F:

∂/∂χ (a_0 + a_1 χ) = ∂/∂χ (a_1 χ) = −a_1 ∂/∂χ χ = −a_1 . (B.26)

In other words, ∂/∂χ acts on everything to its right and eliminates a Grassmann generator χ if it stands immediately next to it. This is also called the left-derivative, as the dependent variable χ has to be moved to the left in every product, observing anticommutation, before the derivative can act. Similarly, we can define a right-derivative,

(a_0 + χ a_1) ∂/∂χ = (−a_1 χ) ∂/∂χ = −a_1 . (B.27)

Since any function is at most linear in the generator χ, higher derivatives always vanish, i.e., ∂²/∂χ² = 0 for both the left- and the right-derivative. For the product f(χ)g(χ) of two functions f(χ) = a_0 + χ a_1 and g(χ) = b_0 + χ b_1, the familiar product rule from ordinary calculus holds as well, provided that the respective factors have definite parity and the anticommuting character is respected. Denoting the parity of f by ς(f), we then verify straightforwardly that

∂/∂χ [f(χ) g(χ)] = (∂f(χ)/∂χ) g(χ) + (−1)^ς(f) f(χ) (∂g(χ)/∂χ) . (B.28)

Similarly, derivatives with respect to several variables can be combined by treating every operator like an element of A_F,

∂^k/(∂χ_1 ⋯ ∂χ_k) := (∂/∂χ_1) ⋯ (∂/∂χ_k) . (B.29)

For instance, taking the function f(χ_1, χ_2, χ_3) from (B.23), we find

∂²f(χ_1, χ_2, χ_3)/(∂χ_1 ∂χ_3) = −a_101 + χ_2 a_111 . (B.30)
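This calculus is concrete enough to implement directly. The following minimal Python sketch (the representation and the helper names `gmul` and `dleft` are my own choices, not from the text, and coefficients are restricted to c-numbers) stores a Grassmann element as a dictionary from ordered tuples of generator indices to coefficients, tracks anticommutation signs in products, implements the left-derivative, and reproduces (B.30) for the function of (B.23):

```python
# A Grassmann element is represented as {tuple_of_generator_indices: coeff},
# every key a strictly increasing tuple (chi_i^2 = 0 removes repeats).
# Coefficients are restricted to c-numbers here, so they commute freely.

def gmul(f, g):
    """Product of two Grassmann elements, tracking anticommutation signs."""
    out = {}
    for kf, cf in f.items():
        for kg, cg in g.items():
            if set(kf) & set(kg):              # repeated generator -> 0
                continue
            merged = list(kf) + list(kg)
            sign = 1                           # (-1)^(number of inversions)
            for i in range(len(merged)):
                for j in range(i + 1, len(merged)):
                    if merged[i] > merged[j]:
                        sign = -sign
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + sign * cf * cg
    return {k: c for k, c in out.items() if c != 0}

def dleft(f, i):
    """Left derivative d/dchi_i: anticommute chi_i to the front, strip it."""
    out = {}
    for key, c in f.items():
        if i in key:
            pos = key.index(i)                 # chi_i hops over pos generators
            newkey = key[:pos] + key[pos + 1:]
            out[newkey] = out.get(newkey, 0) + (-1) ** pos * c
    return out

# f(chi1, chi2, chi3) of (B.23) with numeric placeholder coefficients a_ijk
f = {(): 1, (1,): 2, (2,): 3, (3,): 4,        # a000, a100, a010, a001
     (1, 2): 5, (1, 3): 6, (2, 3): 7,         # a110, a101, a011
     (1, 2, 3): 8}                            # a111
d2 = dleft(dleft(f, 3), 1)                    # d^2 f / (dchi1 dchi3)
# cf. (B.30): -a101 + chi2 * a111, i.e. {(): -6, (2,): 8}
```

The sign (−1)^pos in `dleft` is precisely the cost of moving χ_i to the left past the generators preceding it, as described above.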

Grassmann integral. Even though there is no classical geometrical picture such as a number line for Grassmann numbers, we can define an integral over anticommuting degrees of freedom by requiring certain properties of ordinary (Riemann or Lebesgue) integrals. As we will see, demanding linearity and shift invariance of the measure defines the Grassmann-Berezin integral uniquely up to a multiplicative constant. As observed in Eq. (B.22), the most general function of one Grassmann variable is f(χ) = a_0 + χ a_1. By linearity, we mean that

∫ dχ (a_0 + χ a_1) ≟ (∫ dχ) a_0 + (∫ dχ χ) a_1 . (B.31)

Note that it is meaningless to define a domain of integration because the Grassmann algebra does not have a geometry. In this sense, “∫ dχ ⋯” is to be read as an integral over all possible values of χ, i.e., it is similar to a real integral of the form “∫_{−∞}^{∞} dx ⋯”. This leads to the second defining property. Since real integrals over an infinite domain are invariant under a shift of the integration variable, we demand that the Grassmann integral be invariant under the transformation χ ↦ χ′ := χ + ζ, where ζ is a second Grassmann variable independent of χ. More precisely,

∫ dχ (a_0 + χ a_1) ≟ ∫ dχ′ [a_0 + (χ′ − ζ) a_1] = ∫ dχ′ [(a_0 − ζ a_1) + χ′ a_1] . (B.32)

Applying the linearity condition (B.31) to this relation, we immediately find ∫ dχ = 0. Furthermore, ∫ dχ χ must evaluate to a constant, which we choose to be 1 by convention (again, different choices can be found in the literature). Hence we define the Grassmann-Berezin integral as

∫ dχ (a_0 + χ a_1) := a_1 . (B.33)

Remarkably, this is the same relation as we obtained for the (left-)derivative in Eq. (B.24): for anticommuting numbers, differentiation and integration are the same operation. In summary, the defining relations for differentiation and integration of Grassmann variables are, in our convention,

∂/∂χ (a_0 + χ a_1) = ∫ dχ (a_0 + χ a_1) = a_1 , (B.34)

where a_0, a_1 ∈ A may contain both commuting and anticommuting variables, but no dependence on χ. Similarly to higher-order derivatives, multiple integrals can be evaluated sequentially.

Linear transformations. Besides shifting integration variables, we will also want to scale them occasionally. Therefore, we consider the transformation χ ↦ χ′ := tχ with t ∈ A_B. Requiring

1 = ∫ dχ χ ≟ ∫ dχ′ χ′ = ∫ dχ′ tχ , (B.35)

we find that we need to have dχ′ = t^{−1} dχ. This should be contrasted with the transformation of differentials for c-numbers, where d(tx) = t dx (t = const).

For the higher-dimensional generalization, consider a Grassmann vector χ = (χ_1, …, χ_N) and the linear transformation χ ↦ χ′ := Tχ with T ∈ A_B^{N×N}, i.e., χ′_α = ∑_β T_αβ χ_β. The nonvanishing part of the integrand in an integral over dχ′ = dχ′_1 ⋯ dχ′_N is proportional to

χ′_1 ⋯ χ′_N = ∑_{α_1,…,α_N} T_{1α_1} ⋯ T_{Nα_N} χ_{α_1} ⋯ χ_{α_N} = ∑_{σ∈Sym(N)} T_{1σ(1)} ⋯ T_{Nσ(N)} χ_{σ(1)} ⋯ χ_{σ(N)} , (B.36)

where Sym(N) denotes the symmetric group of degree N, i.e., the set of all permutations of {1, …, N}. Here the second equality holds because all terms in which any two indices α_i and α_j (i ≠ j) coincide vanish due to the Grassmann character of the χ_α. Rearranging the vector components, we can equivalently write this as

χ′_1 ⋯ χ′_N = ∑_{σ∈Sym(N)} sgn(σ) T_{1σ(1)} ⋯ T_{Nσ(N)} χ_1 ⋯ χ_N = (det T) χ_1 ⋯ χ_N . (B.37)

Consequently, for Grassmann variables the N-dimensional differential transforms as d(Tχ) = (det T)^{−1} dχ, in contrast to the c-number case, where d(Tx) = (det T) dx.
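The combinatorial identity underlying (B.36) and (B.37) is precisely the Leibniz formula det T = ∑_{σ∈Sym(N)} sgn(σ) T_{1σ(1)} ⋯ T_{Nσ(N)}. A quick numerical sanity check (Python with NumPy; the helper `sgn` and the 3×3 example matrix are my own, not from the text):

```python
import numpy as np
from itertools import permutations

def sgn(p):
    """Sign of a permutation, counted via inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
N = T.shape[0]

# sum over Sym(N) of sgn(sigma) * T_{1 sigma(1)} ... T_{N sigma(N)}
leibniz = sum(sgn(p) * np.prod([T[i, p[i]] for i in range(N)])
              for p in permutations(range(N)))
assert np.isclose(leibniz, np.linalg.det(T))
```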

C Gaussian integrals

Gaussian integrals over commuting and anticommuting variables (cf. Appendix B) feature prominently in the derivations of Sec. 3.4 in particular. Here we collect basic properties of such integrals for commuting (Appendix C.1), anticommuting (Appendix C.2), and supersymmetric (Appendix C.3) variables. We also discuss the related supersymmetric Hubbard-Stratonovich transformation in Appendix C.4.

C.1 Commuting variables

One real variable. Since all Gaussian integrals over real- or complex-valued vectors can be reduced to the real, one-dimensional case, this forms the natural starting point of our collection of results.

To this end, let a > 0. Then

I(a) := ∫_{−∞}^{∞} dx e^{−ax²} = √(π/a) . (C.1)

Indeed, upon substituting x ↦ √a x, we find I(a) = I(1)/√a. The square I(1)² can be computed using polar coordinates,

I(1)² = ∫ dx dy e^{−(x²+y²)} = ∫_0^∞ dr ∫_0^{2π} dφ r e^{−r²} = [−π e^{−r²}]_{r²=0}^{∞} = π . (C.2)

Consequently, we have I(1) = √π, and (C.1) follows.
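The value of (C.1) is easy to confirm numerically. A short sketch (Python with NumPy; the truncation half-width and grid size are ad hoc choices, not from the text) compares a trapezoid-rule approximation of the truncated integral with √(π/a):

```python
import numpy as np

def gauss_1d(a, half_width=40.0, n=400_001):
    """Trapezoid-rule approximation of the integral of e^{-a x^2} over
    [-half_width, half_width]; the tails beyond are negligible for a >= 0.5."""
    x, h = np.linspace(-half_width, half_width, n, retstep=True)
    y = np.exp(-a * x**2)
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

for a in (0.5, 1.0, 3.0):
    assert abs(gauss_1d(a) - np.sqrt(np.pi / a)) < 1e-6
```

For smooth, rapidly decaying integrands like this one, the trapezoid rule converges far faster than its generic error bound suggests, so the tolerance above is very comfortable.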

One complex variable. Next we consider a Gaussian integral over a single complex variable and find

∫ dz dz^* e^{−a|z|²} = 2π/a . (C.3)

To prove this, we first observe that, by definition, the complex integral consists of two real integrals, cf. Eq. (A.4). Therefore, we can exploit the result (C.1) for real-valued Gaussian integrals and obtain

∫ dz dz^* e^{−a|z|²} = 2 ∫ d(Re z) ∫ d(Im z) e^{−a[(Re z)² + (Im z)²]} = 2 (∫ dx e^{−ax²})² , (C.4)

leading to (C.3).

Multiple complex variables. We generalize the result to an arbitrary number N of complex variables, collected in a vector z = (z_1, …, z_N). If A ∈ C^{N×N} is Hermitian (A = A^†) and positive definite, then

∫ [dz dz^*] e^{−z^†Az} = (2π)^N / det A . (C.5)

Here [dz dz^*] := ∏_{α=1}^{N} dz_α dz^*_α. To prove Eq. (C.5), we notice that, since A is Hermitian, there exists a unitary matrix U ∈ U(N) such that U A U^† = diag(a_1, …, a_N), where the a_α are the real eigenvalues of A. Moreover, since A is positive definite, a_α > 0. Using the coordinate transformation z ↦ z′ := U z, we then obtain

∫ [dz dz^*] e^{−z^†Az} = ∫ [dz′ dz′^*] e^{−∑_α z′^*_α a_α z′_α} = ∏_α ∫ dz′_α dz′^*_α e^{−a_α|z′_α|²} = ∏_α 2π/a_α , (C.6)

where we used (C.3) in the last step. Hence we have shown (C.5).
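The last step of (C.6) rests on det A being the product of the eigenvalues a_α, so that (2π)^N / det A = ∏_α 2π/a_α. A small numerical check (Python with NumPy; the construction A = B^†B + 1 is just one convenient way to obtain a Hermitian, positive definite matrix and is not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = B.conj().T @ B + np.eye(N)        # Hermitian and positive definite

a = np.linalg.eigvalsh(A)             # real eigenvalues a_alpha
assert np.all(a > 0)                  # positive definiteness

lhs = (2 * np.pi) ** N / np.linalg.det(A).real
rhs = np.prod(2 * np.pi / a)
assert np.isclose(lhs, rhs)
```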

According to (C.5), we can express the determinant of a Hermitian, positive definite matrix A in terms of a Gaussian integral. We will now show how individual matrix elements of the inverse A^{−1} arise as the correlators or moments of the Gaussian distribution. To this end, we define the generating function

Z(h^*, h) := ∫ [dz dz^*] e^{−z^†Az + h^†z + z^†h} . (C.7)

This is indeed the generating function of moments of a Gaussian distribution (up to normalization) with covariance matrix A^{−1} because

∂^{k+l} Z(h^*, h) / (∂h^*_{μ_1} ⋯ ∂h^*_{μ_k} ∂h_{ν_1} ⋯ ∂h_{ν_l}) |_{h=h^*=0} = ∫ [dz dz^*] z_{μ_1} ⋯ z_{μ_k} z^*_{ν_1} ⋯ z^*_{ν_l} e^{−z^†Az} . (C.8)

As usual, h and h^* are treated as independent variables for differentiation, basically in the same way as in the case of integration, in the sense of the coordinate transformation (A.5) above.

Since A is positive definite, it is invertible and we can shift the integration variable in (C.7) as z ↦ z′ := z − A^{−1}h to obtain

Z(h^*, h) = e^{h^†A^{−1}h} ∫ [dz′ dz′^*] e^{−z′^†Az′} = [(2π)^N / det A] e^{h^†A^{−1}h} . (C.9)

With this explicit expression for the generating function, we can compute arbitrary moments of the form (C.8). For the second-order correlator, we find

∫ [dz dz^*] z_μ z^*_ν e^{−z^†Az} = [(2π)^N / det A] (A^{−1})_{μν} , (C.10)

establishing the aforementioned connection between correlators and matrix elements of A^{−1}. For higher-order correlators, repeated differentiation according to (C.8) leads to the Isserlis-Wick theorem [239, 240]. It states that the correlator is obtained by summing over all pairings of z and z^* variables and replacing each such pair by the corresponding second-order correlator. In particular, this means that we always need an equal number of z and z^* factors to obtain a nonvanishing contribution. More explicitly, we thus find

∫ [dz dz^*] z_{μ_1} ⋯ z_{μ_k} z^*_{ν_1} ⋯ z^*_{ν_k} e^{−z^†Az} = [(2π)^N / det A] ∑_{σ∈Sym(k)} (A^{−1})_{μ_1 ν_{σ(1)}} ⋯ (A^{−1})_{μ_k ν_{σ(k)}} , (C.11)

where Sym(k) denotes the symmetric group of degree k, i.e., the set of all permutations of {1, …, k}.
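Both (C.10) and the k = 2 case of (C.11) can be spot-checked by Monte Carlo: a complex Gaussian with density ∝ e^{−z^†Az} has covariance ⟨z z^†⟩ = A^{−1}, and such samples can be drawn via a Cholesky factor of A. A sketch (Python with NumPy; the 2×2 example matrix, sample size, and tolerances are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 0.5 + 0.5j],
              [0.5 - 0.5j, 1.5]])      # Hermitian, positive definite
Ainv = np.linalg.inv(A)

# Sample z with density ~ exp(-z^dagger A z): with A = L L^dagger (Cholesky),
# z = (L^dagger)^{-1} w has E[z z^dagger] = A^{-1} for standard complex w.
L = np.linalg.cholesky(A)
n = 400_000
w = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
z = np.linalg.solve(L.conj().T, w)     # shape (2, n)

# second moments <z_mu z_nu^*> -> (A^{-1})_{mu nu}, cf. (C.10)
cov = (z @ z.conj().T) / n
assert np.allclose(cov, Ainv, atol=0.02)

# fourth moment via the two Wick pairings, cf. (C.11) with k = 2:
# <z_0 z_1 z_0^* z_1^*> = Ainv[0,0]*Ainv[1,1] + Ainv[0,1]*Ainv[1,0]
m4 = np.mean(z[0] * z[1] * z[0].conj() * z[1].conj())
wick = Ainv[0, 0] * Ainv[1, 1] + Ainv[0, 1] * Ainv[1, 0]
assert abs(m4 - wick) < 0.05
```

The tolerances simply reflect the O(1/√n) statistical error of the sample averages; the normalization factor (2π)^N/det A drops out because the samples are drawn from the normalized distribution.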