Polynomial Coefficients of Dyson Schwinger Equations


Bachelor thesis

Bachelor of science, Physics

Niklas Affolter niklasaffolter@arcor.de

320626 - TU Berlin

Supervisors:

Prof. Dr. Dirk Kreimer - HU Berlin
Prof. Dr. Andreas Knorr - TU Berlin

HU/TU Berlin, 24.10.2012

Abstract

Coefficients of the solution of the Dyson-Schwinger equations are translated from the Hopf algebra of rooted trees to the Hopf algebra of words. There they are expressed as shuffles of Lie brackets of Lyndon words. This is done for up to four nested primitives and up to three different ones. Furthermore, the necessary algebraic structures are presented in an accessible way.


Contents

1 Introduction
2 Hopf algebras
2.1 Definition
2.2 Graduation
2.3 Primitive elements
2.4 Convolution
3 Lie algebras
3.1 Definition
3.2 Graduation
3.3 Lie algebras from Hopf algebras
3.4 Universal enveloping algebras
3.5 Lie algebras from pre-Lie algebras
3.6 Free Lie algebras
4 The Hopf algebra of decorated rooted trees HR
4.1 Motivation for trees
4.2 Decorated rooted trees
4.3 The commutative product m(·,·)
4.4 Grafting operator Bd
4.5 Coproduct ∆
4.6 Antipode
4.7 Lie bracket
5 Hopf algebras of words
5.1 Concatenation algebra HC
5.2 Shuffle algebra HS
5.3 Mapping φ from VW to HR
5.4 Higher order primitives
6 Dyson-Schwinger equations
7 Polynomial Coefficients
7.1 Counting trees
7.2 Trees written in words
7.3 Words in shuffles of Lie brackets
7.4 Coefficients in shuffles of Lie brackets
8 Conclusion


1 Introduction

One common way to characterize a quantum field theory is to present its Lagrangian. Another, more visual way is to write down its Dyson-Schwinger equations in terms of graphs. And while quantum field theories are very popular because they form the framework of the Standard Model of particle physics, there are still serious problems when trying to formulate them in a mathematically sensible way. The main issue is the process of renormalization, to which the introduction of Hopf algebras by Kreimer brought new understanding (see [1]). Another good review of the ongoing research in that matter was given by Ebrahimi-Fard and Kreimer [5]. This is also strongly linked to the difficulty of obtaining a non-perturbative quantum field theory, which is where the study of the behaviour of the Dyson-Schwinger equations becomes interesting.

The goal of this thesis is to calculate low order coefficients (up to nesting order four) of the solution of single Dyson-Schwinger equations with up to three different primitives of the same type, and to re-express these coefficients as shuffles of Lie brackets in the Lyndon basis. In this form they will hopefully pave the way for further research towards a better understanding of non-linear Dyson-Schwinger equations, especially regarding quantum field theories.

With respect to the calculations and the problem in general, it is intended that all prerequisites are explained and summarized in an easily accessible way. Therefore sections two and three introduce the necessary aspects of Hopf and Lie algebras, respectively, which form the framework for all that follows. Section four focuses on the Hopf algebra of rooted trees HR in which the Dyson-Schwinger equations are first expressed. This is followed by a description of the shuffle Hopf algebra into which the solutions of the Dyson-Schwinger equations will be rendered by calculation. Having explained the necessary structures, the Dyson-Schwinger equations themselves are introduced in section six. Section seven presents some of the important intermediate results (including some which may be of value for calculations of higher orders) and combines them to present the already mentioned coefficients of the solutions of the Dyson-Schwinger equations.


2 Hopf algebras

2.1 Definition

Hopf algebras will be the foundation of all calculations done in this thesis, yet they will not be the center of studies themselves. Therefore it is important to get a solid understanding of them, while not all mathematical aspects are elaborated here. Several Hopf algebras will be presented in the following sections, where the maps will be presented in more detail.

Formally, a Hopf algebra is a co-/associative bialgebra H over a field K together with a K-linear map S (the antipode) such that diagram 1 commutes. The notion of a bialgebra and the necessary mathematical background can be found in Erik Panzer's master thesis [11].

Let us look at the mappings occurring in a Hopf algebra and some important axioms:

Figure 1: Commuting diagram of the antipode.

m : H ⊗ H → H (inner product) (2.1.1)
∆ : H → H ⊗ H (coproduct) (2.1.2)
η : K → H (unit map) (2.1.3)
ε : H → K (counit map) (2.1.4)
S : H → H (antipode) (2.1.5)
m ∘ (m ⊗ id) = m ∘ (id ⊗ m) (associativity) (2.1.6)
m ∘ (η ⊗ id) = id = m ∘ (id ⊗ η) (neutral map) (2.1.7)
m ∘ (id ⊗ S) ∘ ∆ = η ∘ ε = m ∘ (S ⊗ id) ∘ ∆ (inverse map) (2.1.8)

This is very familiar to everyone who knows the structure of a group. Immediately we can identify m with the product of the group and S with the inverse.


We have to keep in mind, though, that Hopf algebras live on vector spaces, while groups live on sets. Therefore it is important to note that all Hopf algebra maps are K-linear. The unit and counit maps arise in the group axioms in a similar manner when one defines a group solely by morphisms (η : ∗ → G, ε : G → ∗).

However, the really interesting additional structure lies in the coproduct ∆: it replaces the diagonal map (g ↦ g ⊗ g, ∀g ∈ G) in the axioms of a group. In graduated Hopf algebras it describes a way to assign to every element a sum of its components. Also note that the coproduct is an algebra homomorphism, while the antipode is an antihomomorphism:

S(h1h2) = S(h2)S(h1), ∀h1, h2 ∈ H   ⟷   (g1g2)⁻¹ = g2⁻¹g1⁻¹, ∀g1, g2 ∈ G (2.1.9)

As the antipode is an important aspect of a Hopf algebra, it is interesting to note that for every connected filtered bialgebra it is possible to construct an antipode and therefore to obtain a Hopf algebra structure. This and many other properties of Hopf algebras are derived in Dominique Manchon's extended lecture notes [8].

2.2 Graduation

All the Hopf algebras in this thesis will be N0-graded. This means the underlying vector space V can be written as the direct sum of subspaces Vk. Every element of one of these subspaces Vk is said to be of grade k.

V = ⊕_{k≥0} Vk (2.2.1)

The gradings may be the number of loops for graphs, the number of vertices or the sum of vertex weights for trees, or the number of letters for words. All the mappings will respect this grading in the following sense:

m(Vk ⊗ Vl) ⊂ Vk+l (2.2.2)
∆(Vn) ⊂ ⊕_{k+l=n} Vk ⊗ Vl (2.2.3)
S(Vn) ⊂ Vn (2.2.4)

2.3 Primitive elements

An element h∈ H is considered to be primitive if its coproduct has this form:

h ∈ Prim(H) ⇔ ∆(h) = 1 ⊗ h + h ⊗ 1 (2.3.1)

All elements of grade 1 are primitive, because the coproduct respects the grading and always involves the term 1 ⊗ h + h ⊗ 1 [8]. There may be primitives of higher grade, some of which we will construct using the convolution product.


2.4 Convolution

The space of endomorphisms of H is naturally equipped with composition, which renders it an associative algebra. There is another product which will prove useful: for f, g ∈ End(H) define the convolution ∗:

∗ : End(H) × End(H) → End(H), (f, g) ↦ m ∘ (f ⊗ g) ∘ ∆ (2.4.1)

The convolution equips the endomorphisms of a Hopf algebra with an associative unital product: the unit is given by e = η ∘ ε, and the antipode is the convolution inverse of the identity map. Associativity results from the associativity of the product and the coassociativity of the coproduct. We will use the convolution to construct primitive elements in the Hopf algebra of rooted trees.


3 Lie algebras

3.1 Definition

Lie algebras are well known to all mathematicians and physicists; a good textbook resource is the one by Fuchs and Schweigert [7]. Here, in short, their definition: a Lie algebra L is a K-vector space together with a bilinear mapping [·,·] : L × L → L called the Lie bracket, which fulfills [l, l] = 0 ∀l ∈ L (it is alternating) as well as the Jacobi identity:

[a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0   ∀a, b, c ∈ L (3.1.1)

Antisymmetry of the Lie bracket follows directly from:

0 = [a + b, a + b] = 0 + [a, b] + [b, a] + 0 (3.1.2)

3.2 Graduation

Analogous to Hopf algebras, Lie algebras can be graduated. This will also be the case for all Lie algebras occurring in the following. The Lie bracket will also respect the grading:

[Vk, Vl]⊂Vk+l (3.2.1)

3.3 Lie algebras from Hopf algebras

The most common Lie algebras known in physics are defined in some algebra A via the commutator [·,·], which takes some product · and antisymmetrizes it:

[·,·] : A × A → A, (a, b) ↦ a·b − b·a (3.3.1)

With this commutator the alternating condition is obviously fulfilled. If in addition this product is associative, that is, A is an associative algebra, then the Jacobi identity holds as well:

[a, [b, c]] + [b, [c, a]] + [c, [a, b]]
= a·b·c − a·c·b − b·c·a + c·b·a
+ b·c·a − b·a·c − c·a·b + a·c·b
+ c·a·b − c·b·a − a·b·c + b·a·c = 0 (3.3.2)

The summands cancel pairwise, but note that if A were not associative, a·(b·c) − (a·b)·c might not be zero. The commutator therefore equips every associative algebra - including Hopf algebras - with the structure of a Lie algebra.

It is interesting to see that with this commutator the primitive elements of every Hopf algebra form a Lie algebra as well. To show this we have to demonstrate that the coproduct maps the commutator of two primitive elements to a primitive element again:

∆([p, q]) = ∆(p·q − q·p)
= (p ⊗ 1 + 1 ⊗ p)·(q ⊗ 1 + 1 ⊗ q) − (q ⊗ 1 + 1 ⊗ q)·(p ⊗ 1 + 1 ⊗ p)
= [p, q] ⊗ 1 + 1 ⊗ [p, q] (3.3.3)

3.4 Universal enveloping algebras

Universal enveloping algebras are a way of finding the smallest associative algebra U(L) whose commutator corresponds to the Lie bracket in L. U(L) is constructed by taking the tensor algebra T(L) of the vector space L and then factoring out the difference between the commutator and the respective Lie bracket of all elements:

U(L) = T(L)/I,   I = ⟨{l1 ⊗ l2 − l2 ⊗ l1 − [l1, l2] : l1, l2 ∈ L}⟩ (3.4.1)

The resulting enveloping algebra carries a natural Hopf algebra structure in the manner described later for the concatenation algebra (section 5.1). Furthermore, the Milnor-Moore theorem [9] states that if a Hopf algebra H is connected, graded and of finite type, then it is isomorphic to the enveloping algebra of its Lie algebra of primitive elements, U(Prim(H)).

3.5 Lie algebras from pre-Lie algebras

There is also a structure named pre-Lie algebra, which is non-associative but where the commutator still gives rise to a Lie algebra. The condition for the mapping / to be pre-Lie is:

(a / b) / c − a / (b / c) − (a / c) / b + a / (c / b) = 0 (3.5.1)

If we now calculate the Jacobi identity again:

[a, [b, c]] + [b, [c, a]] + [c, [a, b]]
= a / (b / c) − a / (c / b) − (b / c) / a + (c / b) / a
+ b / (c / a) − b / (a / c) − (c / a) / b + (a / c) / b
+ c / (a / b) − c / (b / a) − (a / b) / c + (b / a) / c = 0 (3.5.2)

The terms cancel by repeated use of the pre-Lie condition. In subsection 4.7 the appending operation on rooted trees will be introduced, which will give an example of a pre-Lie algebra.


3.6 Free Lie algebras

A good resource for reading extensively about free Lie algebras is Reutenauer's textbook "Free Lie Algebras" [14]. A free Lie algebra is defined by a generating set; no relations beyond the defining properties of a Lie algebra are assumed. This leads to the search for a basis of the generated vector space and to the so-called Hall bases [14]. The question is to find rules as to which nested commutators should be taken into the basis. For example, in the case of a generating set with three elements we can express all nested commutators up to depth 2 by linear combinations of:

B = {a, b, c, [a, b], [b, c], [c, a], [a, [b, c]], [b, [a, c]]} (3.6.1)

Because of the Jacobi identity, we may omit one of the double commutators. While this is easy to see, it becomes much more complex when looking at higher depths. As the aim of this thesis is to express some tree polynomials in terms of shuffles of Lie brackets, it is important to have a specific basis of the Lie algebra. The basis will be formed by the Lyndon words, which are therefore introduced here. Let all the elements of the generating set be labelled by letters with an ordering (e.g. a < b < c, ...). Call this set G. A word can be split into two parts (e.g. abaa into a and baa). Now a word is a Lyndon word if and only if for every splitting, the left word is lexicographically smaller than the right one. Some examples:

abac : a < bac, ab < ac, aba < c ⇒ Lyndon (3.6.2)
bababa : b > ababa ⇒ not Lyndon (3.6.3)
aaa : a < aa, aa > a ⇒ not Lyndon (3.6.4)

For a given generating set G (an alphabet), call the set of all Lyndon words W. The corresponding Lie bracketing L(w) of a Lyndon word w is calculated recursively as follows: if w ∈ G then L(w) = w. If not, find the factorization w = uv such that u and v are Lyndon words and v has maximal length. Then L(w) = [L(u), L(v)]. Some examples:

L(aaba) = [L(aab), a] = [[a, [a, b]], a] (3.6.5)
L(abac) = [L(ab), L(ac)] = [[a, b], [a, c]] (3.6.6)
L(abc) = [a, [b, c]] (3.6.7)

The Lyndon basis for a given set of letters G is now simply the Lie bracketing of all Lyndon words formed by these letters. For example, the Lyndon basis for the alphabet G = {a, b} involving up to four letters is:

BL = {a, b, [a, b], [a, [a, b]], [[a, b], b], [a, [a, [a, b]]], [[a, [a, b]], b], [[[a, b], b], b]} (3.6.8)
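To make the splitting criterion and the recursive bracketing concrete, here is a small Python sketch; the function names, the encoding of words as strings and of nested brackets as nested lists are illustrative choices of this note, not notation from the thesis.

```python
def is_lyndon(w):
    """A word is Lyndon iff for every splitting w = uv (u, v nonempty), u < v lexicographically."""
    return len(w) > 0 and all(w[:i] < w[i:] for i in range(1, len(w)))

def lyndon_bracketing(w):
    """L(w) = w for a single letter; otherwise L(w) = [L(u), L(v)] where w = uv,
    u and v are Lyndon words and v has maximal length (the rule stated above)."""
    if len(w) == 1:
        return w
    for i in range(1, len(w)):                    # i = 1 gives the longest suffix v
        u, v = w[:i], w[i:]
        if is_lyndon(u) and is_lyndon(v):
            return [lyndon_bracketing(u), lyndon_bracketing(v)]
    raise ValueError(f"no admissible factorization of {w!r}")

print(is_lyndon("abac"), is_lyndon("aaa"))        # True False, cf. (3.6.2) and (3.6.4)
print(lyndon_bracketing("abac"))                  # [['a', 'b'], ['a', 'c']], cf. (3.6.6)
```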


4 The Hopf algebra of decorated rooted trees HR

4.1 Motivation for trees

Many of the interesting structures (and problems) in quantum field theories arise in the process of renormalization. This comes from the fact that the loop integrals associated with Feynman diagrams do not converge in general (see any textbook on QFT [12]). A big step to tackle these divergencies was the introduction of the forest formula. Its very name is inspired by the tree-like structure when tracing subdivergencies of Feynman diagrams:

(4.1.1) [Figure: two example Feynman graphs together with the rooted trees that trace their nested subdivergences; the graphs are not reproduced here.]

The tree associated to a graph captures its nesting structure and forgets the - for many calculations superfluous - information of where exactly the subdivergences are inserted. For the latter fact there will be need for some compensation in counting, as we will see later in section 6 on Dyson-Schwinger equations.

4.2 Decorated rooted trees

There are two ways to construct a tree. One is to consider an undirected graph (N, E) defined by its sets of nodes N and edges E. If we choose a node, call it the root, and demand that the graph has no loops (i.e. is simply connected), we have defined a rooted tree. The set of all rooted trees will be called T from now on. If we furthermore define a set of decorations D and assign to every node an element of D, the rooted trees become decorated rooted trees.

We will also use the weight of a decoration, which we identify with the weight of a node, and the weight of a tree, which is simply the sum of the weights of its nodes. The number of nodes of a tree will be denoted by |t|:

wD : D → N (4.2.1)
wT : T → N, t ↦ Σ_{n∈N(t)} wD(n) (4.2.2)
| · | : T → N, t ↦ |t| = |N(t)| (4.2.3)

Some examples:

D={ , } , wD( ) = 1, wD( ) = 2 (4.2.4) {t ∈ T,|t| ≤2} =

n

, , , , , o

(4.2.5) {t∈ T, wT(t) = 3} =

, , ,

We call the nodes adjacent to the root its children, call the root the parent of its children, and draw the root above its children. Repeating this recursively in the drawing process, every node except the root obtains a node which we call its parent. One also has to be aware of the fact that the nature of our two-dimensional notation forces us to always draw planar representatives of rooted trees. One could be misled into assuming an ordering of the children of a node, but we have defined the children of a node solely by the set of edges, and therefore all orderings are equal. For example:

= (4.2.7)

The K-vector space VR is the vector space on which the Hopf algebra of decorated rooted trees HR lives. K is considered to be any field of characteristic 0, but we will do all calculations in Q. The basis of VR is generated by T ∪ {I} via the free commutative product, which will also be the algebra product m in HR. As a consequence VR is infinite dimensional (as is its generating set T).

4.3 The commutative product m(·, ·)

In the Hopf algebra of decorated rooted trees HR the product m is the free commutative product. Such products of trees are called forests (which then are just one or more disjoint trees). For example:

m

,

= =m

,

In essence this multiplication is the same as the union operation in multiset theory. Multisets are sets in which elements may appear multiple times [3]. The union is commutative and associative, and so is m. Also define the empty tree I without nodes to be the neutral element of the multiplication:

m(t,I) =t=m(I, t),∀t∈ HR (4.3.2)

4.4 Grafting operator Bd

The action of a grafting operator Bd on a forest t∈ HR is defined by adding a new root with decoration d∈ D, to which all the roots of t are connected. For example:

B ( ) = (4.4.1)

This definition is extended by linearity:

Bd(k1t1+k2t2) = k1Bd(t1) +k2Bd(t2),∀t1, t2 ∈ HR, k1, k2 ∈K (4.4.2)

4.5 Coproduct ∆

The coproduct of a tree is a linear combination of tensor products of possible stems and crowns of the tree. To get the coefficients right we introduce admissible cuts and the complete cut.

An admissible cut is a set of edges to be severed; these edges will appear neither in the stem nor in the crown. In order to be admissible, every severed edge has to be connected to the root by unsevered edges. The stem is the set of those nodes and edges which are still connected to the root. The nodes which got an edge severed and are not in the stem are the roots of the trees which form a forest called the crown. The complete cut is not a cut in the just defined sense, but delivers an empty stem while the crown is the whole tree (complete cut: t ↦ I ⊗ t). The empty cut is just the cut where the set of severed edges is empty. Call Ac(t) the set of all admissible cuts together with the complete cut, and SC and CC the stem respectively the crown of a cut C:

∆(t) = I ⊗ I if t = I,
∆(t) = Σ_{C∈Ac(t)} CC ⊗ SC otherwise (4.5.1)

Some examples:

= ⊗I+ 2 ⊗ + ⊗ + ⊗ +I⊗ (4.5.2)

= ⊗I+ ⊗ + ⊗ + ⊗ +I⊗ (4.5.3)

∆ ( ) = ∆ ( ) ∆ ( ) = ⊗I+ ⊗ + ⊗ +I⊗ (4.5.4) The coproduct can also be defined recursively by demanding (see [11] section 2.3.1):

∆ ∘ Bd = Bd ⊗ I + (id ⊗ Bd) ∘ ∆ (4.5.5)

The coproduct definition is also extended linearly.
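As a cross-check of the recursive definition (4.5.5), here is a small Python sketch; the tuple encoding of trees as (decoration, children) pairs, of forests as sorted tuples of trees, and all function names are illustrative choices of this note, not the thesis' notation. The coproduct is returned as a dictionary mapping pairs (left forest, right forest) to integer coefficients.

```python
from collections import defaultdict

def B(d, forest):
    """Grafting operator B_d: connect all roots of `forest` to a new root decorated d."""
    return (d, tuple(sorted(forest)))

def coproduct_forest(forest):
    """Delta on a forest: Delta is an algebra morphism, so multiply the coproducts of the trees."""
    result = {((), ()): 1}                                   # Delta(I) = I (x) I
    for tree in forest:
        new = defaultdict(int)
        for (l1, r1), c1 in result.items():
            for (l2, r2), c2 in coproduct_tree(tree).items():
                new[(tuple(sorted(l1 + l2)), tuple(sorted(r1 + r2)))] += c1 * c2
        result = dict(new)
    return result

def coproduct_tree(t):
    """Delta(B_d(f)) = B_d(f) (x) I + (id (x) B_d) Delta(f), cf. (4.5.5)."""
    d, kids = t
    result = defaultdict(int)
    result[((t,), ())] += 1
    for (left, right), c in coproduct_forest(kids).items():
        result[(left, (B(d, right),))] += c
    return dict(result)

ladder = B("a", (B("a", ()),))       # the two-node ladder, both nodes decorated a
print(coproduct_tree(ladder))        # ladder (x) I  +  node (x) node  +  I (x) ladder
```

For the two-node ladder this reproduces the expected three terms without ever enumerating admissible cuts explicitly.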


4.6 Antipode

Call A(t) the set of admissible cuts of a tree t (without the complete cut); then the antipode is defined recursively by:

S(t) = I if t = I,
S(t) = −Σ_{C∈A(t)} m(SC, S(CC)) otherwise (4.6.1)

Note that the empty cut is included, contributing a term −t. The antipode too is extended by linearity and the algebra homomorphism property. Some examples:

S( ) = − (4.6.2)

S = − − S( ) =− + (4.6.3)

S

= − −2 S( )− S( )− S

= − + 2 − −

− −2 S( )− S( )

= − + 2 + + + (4.6.4)

4.7 Lie bracket

As HR is commutative, the Lie algebra structure induced by the commutator (section 3.3) is abelian and therefore not very interesting. However, there is a second Lie algebra structure: define the "appending" operation VR × VR → VR, (f1, f2) ↦ f1 / f2, as the sum of all possible forests arising when appending every root of f2 to a node of f1. Some examples:

/ = (4.7.1)

/ = + + (4.7.2)

/ = + 2 + (4.7.3)

/ = 2 (4.7.4)

This operation is not associative; the associator is non-vanishing:

( / )/ − /( / ) = / − / = + − = 6= 0 (4.7.5)

But / fulfills the pre-Lie condition 3.5.1:

(f1/ f2)/ f3−f1/(f2/ f3) = (f1/ f3)/ f2−f1/(f3/ f2) (4.7.6)


This comes quite easily, when trying to reexpress (f1/ f2)/ f3:

(f1 / f2) / f3 = f1 / (f2 / f3) + (f1 / f3) / f2 − f1 / (f3 / f2) (4.7.7)

Appending f3 to f1 / f2 is the same as appending f3 directly to f2 and then appending this to f1 (term 1), plus appending f3 first to f1 followed by appending f2 to this (term 2). But now we have to subtract where we actually appended f2 to f3 (term 3). An example:

( / )/ = + (4.7.8)

/( / ) + ( / )/ − /( / ) = /

+ /

− /

(4.7.9)

= + + − = + (4.7.10)

Note that if the left tree t1 has one node the appending operation coincides with the grafting operator Bt1:

t1 / f2 = Bt1(f2)   ∀t1 ∈ T, f2 ∈ VR (4.7.11)

Now define the Lie bracket as the commutator:

[·,·] : HR × HR → HR, (f1, f2) ↦ f2 / f1 − f1 / f2 (4.7.12)

The Lie bracket is by definition antisymmetric. As the appending operation is not associative, the Jacobi identity is non-trivial. But a simple evaluation using the pre-Lie property will indeed yield the Jacobi identity (see section 3.5). The Lie bracket is defined to be K-linear as usual.

Examples:

h , i

= + − (4.7.13)

[ + , ] = 2 + 2 − − (4.7.14)

With the so defined bracket HR is equipped with a second Lie algebra structure.
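The appending operation and the bracket (4.7.12) can be sketched in the same tuple encoding as in the coproduct example above, here restricted to single trees; again, the function names and the encoding are illustrative choices of this note.

```python
from collections import Counter

def append_tree(t1, t2):
    """t1 / t2 for single trees: the sum (with multiplicities) of all trees obtained by
    attaching the root of t2 as a new child of one node of t1."""
    d, kids = t1
    result = Counter()
    result[(d, tuple(sorted(kids + (t2,))))] += 1             # attach at the root of t1
    for i, child in enumerate(kids):                          # or inside one of the subtrees
        rest = kids[:i] + kids[i + 1:]
        for new_child, c in append_tree(child, t2).items():
            result[(d, tuple(sorted(rest + (new_child,))))] += c
    return result

def bracket(t1, t2):
    """[t1, t2] = t2 / t1 - t1 / t2, cf. (4.7.12)."""
    result = Counter(append_tree(t2, t1))
    for tree, c in append_tree(t1, t2).items():
        result[tree] -= c
    return {t: c for t, c in result.items() if c != 0}
```

Appending onto a tree with identical subtrees automatically produces the integer multiplicities seen in the examples above.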


5 Hopf algebras of words

Both Hopf algebras presented in this section are treated in great detail in Reutenauer [14].

The basis of the K-vector space VW which underlies both of the following Hopf algebras is generated by a set of letters G and the free associative noncommutative product m, also called the concatenation product.

5.1 Concatenation algebra HC

The inner product in the Hopf algebra HC is the same product as the one generating the vectorspace VW, for example:

m(l1l2, l3) = l1l2l3 (5.1.1)

The neutral element is the empty word e. The associated coproduct δ is defined on letters as

δ(l) = l⊗I+I⊗l (5.1.2)

which generalizes to words by the homomorphism property. For example:

δ(aab) = δ(a)δ(a)δ(b) = (aa ⊗ I + 2a ⊗ a + I ⊗ aa)(b ⊗ I + I ⊗ b)
= aab ⊗ I + 2ab ⊗ a + b ⊗ aa + aa ⊗ b + 2a ⊗ ab + I ⊗ aab (5.1.3)

The antipode as well has a very simple structure:

S(l1l2...ln) = (−1)ⁿ ln ln−1 ... l1,   ∀li ∈ G (5.1.4)

It is also possible to define a Lie bracket via the usual commutator:

[·,·] : HC × HC → HC, (w1, w2) ↦ w2w1 − w1w2 (5.1.5)

The concatenation is associative and consequently a Lie algebra structure emerges, as outlined in section 3.3.
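Since δ is defined on letters by (5.1.2) and extended as an algebra homomorphism, δ(w) distributes the letters of w over the two tensor factors in all order-preserving ways. A short Python sketch (function name mine), which reproduces example (5.1.3) for the word aab:

```python
from collections import Counter
from itertools import combinations

def concat_coproduct(w):
    """delta(w): extend delta(l) = l (x) I + I (x) l multiplicatively over concatenation."""
    result = Counter()
    for k in range(len(w) + 1):
        for left_positions in combinations(range(len(w)), k):
            left = "".join(w[i] for i in left_positions)
            right = "".join(w[i] for i in range(len(w)) if i not in left_positions)
            result[(left, right)] += 1
    return result

print(concat_coproduct("aab"))   # ('aab',''):1, ('ab','a'):2, ('a','ab'):2, ..., ('','aab'):1
```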

5.2 Shuffle algebra HS

It is also possible to define the shuffle product ⧢, which shuffles the letters of two words while preserving the ordering of the letters within each word. It can be defined recursively:

l1w1 ⧢ l2w2 = l1(w1 ⧢ l2w2) + l2(l1w1 ⧢ w2) (5.2.1)
e ⧢ w = w = w ⧢ e (5.2.2)


Two examples:

a ⧢ b = a(e ⧢ b) + b(a ⧢ e) = ab + ba
ab ⧢ ac = a(b ⧢ ac) + a(ab ⧢ c) = a(bac + abc + acb) + a(abc + acb + cab)
= abac + 2aabc + 2aacb + acab (5.2.3)

The shuffle algebra on words can also be extended to a Hopf algebra HS, by introducing the decomposition coproduct δ:

δ(w) = Σ_{uv=w} u ⊗ v (5.2.4)

The antipode of this Hopf algebra is the same as in the concatenation algebra, eq. (5.1.4). More details can be found in Reutenauer [14], chapter 1.5.
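The deconcatenation coproduct (5.2.4) is even simpler to compute; a one-line Python sketch (function name mine):

```python
def deconcatenation(w):
    """delta(w) = sum of u (x) v over all splittings w = uv, cf. (5.2.4)."""
    return [(w[:i], w[i:]) for i in range(len(w) + 1)]

print(deconcatenation("abc"))   # [('', 'abc'), ('a', 'bc'), ('ab', 'c'), ('abc', '')]
```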

As with all algebras, the question of finding a basis is of interest. Reutenauer [14] cites a paper by Perrin and Viennot (1981) stating that over Q the Lie brackets of the Lyndon words form a basis of the shuffle algebra. As the paper is unpublished, a short argument is provided here as well. Radford [13] showed that the Lyndon words (not their bracketings) generate, via the shuffle product, a basis of the shuffle algebra. Obviously there are as many bracketings of Lyndon words as there are Lyndon words. We first show that the bracketings of Lyndon words of a given length are linearly independent. Then we show that a bracketed Lyndon word cannot be expressed as a sum of shuffles of bracketings of smaller length.

First observe the following triangular property of the Lyndon bracketings:

∀l ∈ BL:l≤L(l) (5.2.5)

Here the ordering is the lexicographical one and the inequality applies to every summand on each side. The proof is by induction on the length of the words (using the properties explained in section 3.6):

∀l ∈ G : l ≤ L(l) = l (5.2.6)

∀l ∈ BL : L(l) = [L(u), L(v)] = L(u)L(v) − L(v)L(u) ≥ uv = l,   since L(u)L(v) ≥ uv and L(v)L(u) ≥ vu (5.2.7)

Where we used the induction hypothesis and the Lyndon property. For every multiset we choose out of G, we can then take the greatest Lyndon bracketing and observe that the next lesser one will introduce a new word. Therefore they are linearly independent.

To show that bracketings of Lyndon words cannot be expressed as sums of shuffles of Lyndon bracketings of smaller length, we borrow the binary operation called the right residual (written ▷ here) from [10], where a detailed explanation can be found. It can be described as removing the right word from the beginning of the left word; if this is not possible it gives zero. Some examples:

abc ▷ ab = c,   acb ▷ ab = 0 (5.2.8)


The right residual with respect to a bracketing acts as a derivation for the shuffle product, see also [10]. Now assume a bracketing of a Lyndon word could be written as a sum of shuffles of bracketings of smaller length:

L(l) = Σ L(ki) ⧢ ... ⧢ L(kn),   |ki| < |l| (5.2.9)

Take the right residual of L(l) on both sides:

2|l|= 0 (5.2.10)

The right side equals zero because of the derivation property: the right residual of L(l) acts on words of smaller length, which always gives zero. We arrive at a contradiction and have therefore proven that it is not possible to write a bracketing of a Lyndon word as shuffles of bracketings of smaller length.

We can repeat these two steps for Lyndon words of all lengths and have therefore deduced, from the Lyndon words being a basis, that the bracketings of Lyndon words also form a basis of the shuffle algebra of words. In shuffles of this basis the results of the calculations of this thesis will be presented in section 7.4.

5.3 Mapping φ from VW to HR

If the letters G generating VW consist of the decorations D of HR there is a mapping φ from VW to HR. Define it on words by:

φ : VW → HR,   w = l1...ln ↦ l1 / (l2 / (... (ln−1 / ln))),   li ∈ G (5.3.1)

Extend it to a definition on all of VW by K-linearity. φ maps solely onto the trees in HR without side branches and without forests. It is straightforward to see that this mapping is injective, and when restricted to the trees without side branches it is a bijection.
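In the tuple encoding used in the tree sketches above, φ simply builds a ladder tree with the first letter at the root; a tiny illustrative sketch:

```python
def phi(word):
    """phi(l1...ln) = l1 / (l2 / (... / ln)): the ladder with l1 at the root, cf. (5.3.1)."""
    forest = ()                                   # the empty forest I
    for letter in reversed(word):
        forest = ((letter, forest),)              # put a new root decorated `letter` on top
    return forest[0] if forest else ()

print(phi("ab"))   # ('a', (('b', ()),)) : the ladder with a at the root and b below
```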

5.4 Higher order primitives

With the knowledge of φ and the shuffle product, we introduce a possibility to generate primitives in HR beyond trees with one node. Start with a tuple of letters u = (l1, ..., ln), calculate their full shuffle l1 ⧢ ... ⧢ ln, set its coefficients to one and map the result to HR. So far:

(l1, ..., ln) ↦ t0 = N φ(l1 ⧢ ... ⧢ ln) (5.4.1)


Where N is a factor setting the summands' coefficients to one. Now define the derivation D, which maps a tree t to |t| t, and use the convolution product to define:

P0(t0) := (1/|t0|)(S ∗ D)(t0) (5.4.2)

P(u) := (N/n)(S ∗ D) φ(l1 ⧢ ... ⧢ ln) (5.4.3)

Due to the shuffle, the ordering of the li in the tuple u does not matter. The generated sums of forests are primitives in HR and the process is called polarization [2].

∆(P(u)) = P(u) ⊗ I + I ⊗ P(u) (5.4.4)

Examples:

P(a, b) = 1

2(S∗D)φ(ab) = 1

2m(S⊗D)∆

+

= 1

2m(S⊗D)

+

⊗I+ ⊗ + ⊗ +I⊗

+

= 1 2

− − + 2 + 2

= + − (5.4.5)

P(P(a, b), a) = a / P(a, b) +P(a, b)/ a− P(a, b)

= + − + + + + − − − − +

= + 2 + + − −2 − + (5.4.6)

The Lie brackets (4.7) in the Hopf algebra of rooted trees also provide new primitives, due to the fact that the evaluation of the term ( ⊗ − ⊗ ) vanishes for primitives and , see [4].

We will not need those in the calculations, but it is this relation which allows us to apply the Milnor-Moore theorem toHR (section 3.4).


6 Dyson-Schwinger equations

If we consider quantum electrodynamics and look at the vertex function while neglecting loop corrections to the propagators, we get an equation expressed in Feynman graphs (6.0.7). It describes the fact that if we want to do calculations at higher orders of perturbation theory, we have to consider nested vertex functions as well. In addition, at higher orders new vertex diagrams appear which cannot be expressed by nesting one-loop vertex functions into each other.

Let's take a look at the first orders of only the vertex diagrams in quantum electrodynamics:

(6.0.7) [Equation in Feynman graphs: the full vertex equals the bare vertex plus α times the one-loop nested vertex corrections plus α² times the two-loop terms plus O(α³); the graphs are not reproduced here.]

Where the blobs represent the nesting of the vertex graph in itself and α counts the loop order.

We can rewrite the nesting structure in terms of trees of HR as:

T = 1 + α B_{P1}(T³) + α² B_{P2}(T⁵) + O(α³) (6.0.8)

where P1 represents the primitive one-loop vertex and P2 the primitive two-loop vertex graph. The solution of this self-consistency equation can be written as a power series in α with coefficients in HF: Γ = Σ_{i≥0} α^i γi. Because it will prove useful for calculating the coefficients, we also do the substitution T = 1 + αX, which in this example gives:

X = B_{P1}((1 + αX)³) + α B_{P2}((1 + αX)⁵) + O(α²) (6.0.9)

One can proceed similarly with only propagators (2-point functions), but one has to keep in mind that the propagators can repeat themselves on internal lines. This could be expressed by summing up all positive integer powers of T. In that case we would, however, add up the 1 infinitely often, which is why we choose to sum up the powers (1 − T)^k, including k = 0. This is just the geometric series 1/(1 − (1 − T)) = T⁻¹. In Φ³-theory this leads to the following equation:

T = 1 − α B(T⁻²) (6.0.10)

which by the substitution T = 1 − αX produces

X = B((1 − αX)⁻²) (6.0.11)


Now Foissy [6] has discovered in his work that there exist classes of Dyson-Schwinger equations in trees such that their coefficients generate a Hopf subalgebra. We will choose those which correspond to the propagators and vertices of quantum field theories and calculate the first orders of their solutions.

The α coefficient keeps track of the loop order of the corresponding graphs:

X = Σ_{i≥0} c_{i+1} α^i (6.0.12)

While the trees themselves have coefficients in N. These can be calculated as a product of certain values of their vertices:

c_i = Σ_{t∈Ti} f(t) t (6.0.13)

f : T → K,   t ↦ Π_{v∈t} v (6.0.14)

From Foissy we know that the equations generating Hopf subalgebras in general look like (J ⊂N):

X = Σ_{j∈J} Bj((1 − µX)^{λj/µ + 1}),   µ ≠ 0 (6.0.15)

or:

X = Σ_{j∈J} Bj(e^{λj X}) (6.0.16)

We set λ = ±1, µ ∈ {0, 1} and replace X → αX, Bj → α^j Bvj:

X = Σ_{j∈J} α^{j−1} Bvj((1 ± αX)^{±j+1}) (6.0.17)

X = Σ_{j∈J} α^{j−1} Bvj(e^{±jαX}) (6.0.18)

Both signs in the exponential generate the same Hopf subalgebra, as can be seen by:

α → −α,   e^{+jαX} → e^{−jαX},   X = Σ_{i≥0} ci α^i → X = Σ_{i≥0} (−1)^i ci α^i (6.0.19)

For |J| = 1 this leads in all cases to

X = α^{j−1} Σ_{k≥0} dk α^k B(X^k) (6.0.20)

The dk ∈ K determine the factor a vertex with k children contributes to the coefficient of a tree:

v = dk A(v) (6.0.21)


Where A(v) is a multinomial coefficient, counting the number of different permutations of v's subtrees when embedded into the plane. This multinomial coefficient comes directly from X^k. In the exponential case the dk follow directly from the formal series defining the exponential:

Bj(e^{jαX}) = Σ_{k≥0} α^k (j^k/k!) Bj(X^k)   ⇒   dk = j^k/k! (6.0.22)

In the case of positive powers we get binomial coefficients (writing C(j, k) for the binomial coefficient):

B_{j−1}((1 + αX)^j) = Σ_{k=0}^{j} α^k C(j, k) B_{j−1}(X^k)   ⇒   dk = C(j, k) = j!/((j−k)! k!) (6.0.23)

In the case of negative powers, consider j = 2 first:

B₂((1 − αX)⁻¹) = Σ_{k≥0} α^k B₂(X^k) (6.0.24)

(j − 1)! (1 − αX)^{−j} = α^{1−j} d^{j−1}/dX^{j−1} (1 − αX)⁻¹ = Σ_{k≥j−1} α^{k−j+1} (k!/(k−j+1)!) X^{k−j+1}
= Σ_{k≥0} α^k ((k+j−1)!/k!) X^k (6.0.25)

B((1 − αX)^{−j}) = Σ_{k≥0} α^k ((k+j−1)!/((j−1)! k!)) B(X^k)   ⇒   dk = (k+j−1)!/((j−1)! k!) (6.0.26)

There is also the possibility of rewriting X:

(j −1)!k! (6.0.26) Also the possibility of rewriting X:

X˜ = 1±αX ⇒X˜ = 1±X

j∈J

αjBvj

±j+1

(6.0.27) With the number of children n (k above) and the fertility f (j above) this leads to the vertex coefficients:

v+=A f!

(f−n)!n! = 1 S

f!

(f −n)! (6.0.28)

v =A(n+f −1)!

n!(f−1)! = 1 S

(n+f−1)!

(f−1)! (6.0.29)

ve=Afn n! = 1

Sfn (6.0.30)

These coefficients count how often it is possible to nest n vertex (v+) or propagator (v−) graphs into a graph with f vertices respectively propagators. While the exponential case can be treated in the same way as vertices or propagators, its physical meaning is still under research and it will not be considered hereafter.

For vertices every nesting spot can only be taken once; it follows immediately that there are f(f − 1)...(f − n + 1) possibilities for that. Propagators, on the other hand, can be nested more than once on the same propagator. For every propagator nested, there is an additional option for where to nest the next propagator, therefore there are f(f + 1)...(f + n − 1) possibilities. These considerations are in accordance with what we found above for v+ and v−.
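The three families of dk from (6.0.22), (6.0.23) and (6.0.26) are easy to tabulate numerically; a short Python sketch (the function names and the use of exact fractions are my own choices):

```python
from fractions import Fraction
from math import comb, factorial

def d_vertex(j, k):
    """Positive powers (1 + alpha X)^j: d_k = binomial(j, k), cf. (6.0.23)."""
    return comb(j, k)

def d_propagator(j, k):
    """Negative powers (1 - alpha X)^(-j): d_k = (k+j-1)! / ((j-1)! k!), cf. (6.0.26)."""
    return factorial(k + j - 1) // (factorial(j - 1) * factorial(k))

def d_exponential(j, k):
    """Exponential case e^(j alpha X): d_k = j^k / k!, cf. (6.0.22)."""
    return Fraction(j ** k, factorial(k))

print([d_vertex(3, k) for k in range(4)])        # [1, 3, 3, 1]
print([d_propagator(3, k) for k in range(4)])    # [1, 3, 6, 10]
```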

Let us look at the already mentioned equation:

X = B_{P1}((1 + αX)³) + α B_{P2}((1 + αX)⁵) + O(α²) (6.0.31)

X = Σ_{i≥0} c_{i+1} α^i,   with P1 and P2 denoting the single-node trees decorated by the one-loop and the two-loop vertex primitive (6.0.32)

Considering that the fertility of P1 is 3 and the fertility of P2 is 5, the first orders of the solution can be calculated iteratively as:

c1 = , c2 = + 3 , c3 = 9 + 3 + 5 + 3

c4 = 27 + 9 + 18 + + 15 + 15 + 9 + 10 + 6 + 5 (6.0.33)

A small test as to whether this could generate a Hopf subalgebra so far:

∆ (c3) = c3⊗1 + 1⊗c3+ 15 ⊗ + 9 ⊗ + 3 ⊗ + 5 ⊗ + 3 ⊗ (6.0.34)

= c3⊗1 + 1⊗c3+ 3c2⊗c1+ 5c1 ⊗c2

∆ (c4) = c4⊗1 + 1⊗c4+ 5 ⊗ (6.0.35)

+ 63 ⊗ + 45 ⊗ + 27 ⊗ + 30 ⊗ + 9 ⊗ + 18 ⊗ + 21 ⊗ + ⊗ + 35 ⊗ + 15 ⊗ + 21 ⊗ + 15 ⊗ + 15 ⊗ + 9 ⊗ + 10 ⊗ + 6 ⊗

= c4 ⊗ 1 + 1 ⊗ c4 + 7c1 ⊗ c3 + 5c2 ⊗ c2 + 3c3 ⊗ c1 + 10c1² ⊗ c2 + 6c1c2 ⊗ c1 + c1³ ⊗ c1

In the remainder we will focus on one-equation systems (only propagator or only vertex nesting) with one to three primitives involved.


7 Polynomial Coefficients

7.1 Counting trees

The first step in the calculation is to list the coefficients associated to each tree. We will use a two-row notation, with the fertility a written above the number of children b, which evaluates to the corresponding v±. The fertilities n and n1 are associated to the first primitive, n2 to the second and n3 to the third. The calculated coefficients therefore depend only on the fertilities and are distinguished by how many different primitives occur in what quantity. A solution coefficient c3 of a Dyson-Schwinger equation for loop order 3 with 1- and 2-loop primitives would then be the sum of γ3 and γ1,1.

γ1 = (7.1.1)

γ2 =n (7.1.2)

γ1,1 =n1 +n2 (7.1.3)

γ3 =n2 +1 2

n 2

(7.1.4) γ2,1 =n21 +n1n2

+

+1

2 n2

2

+ n1

2

(7.1.5) γ1,1,1 =n1n2

+

+n1n3

+

+n2n3

+

+

n1

2

+ n2

2

+ n3

2

(7.1.6) γ4 =n3 + n

2 n

2

+n n

2

+ 1 6

n 3

(7.1.7) γ3,1 = n31 +n21n2

+ +

+ 1 2

n1 3

+ 1 6

n2 3

+ n1 n1

2

+n1 2

n2 2

+ n2 2

n1 2

+ n1 n1

2 +

+n2

n1 2

+n1 n2

2

(7.1.8) In order to avoid drawing 64 different trees, we use the notation t(a1(a2)). A bracketed term after a decoration indicates that it is appended to the preceding node and the ai indicate the decorations:

γ1,1,1,1 = X

(i,j,k,l)∈Π(1,2,3,4)

ninjnkt(ai(tj(tk(tl)))) + 1 2ni

nj 2

t(aiaj(akal)))

+ ni

2

njt(ni(nj(nk)nl))) +1 6

ni 3

t(ai(ajakal))

(7.1.9)


7.2 Trees written in words

Trees without sidebranches translate trivially into words and are therefore omitted here. As a reminder:

=abaa (7.2.1)

The letter a is associated to the first primitive decoration and b to the second. Brackets in words indicate a primitive formed by the letters in the brackets, e.g. a(ab) := aP(ab). The same holds for powers of a letter, e.g. a² := P(aa). The only words difficult to find are those for trees with more than one child at the root. This is because:

Φ−1

= Φ−1◦Ba

=aΦ−1

(7.2.2) The trees with three nodes:

= 2(aaa−aa2) (7.2.3)

= 2(baa−ba2) (7.2.4)

= aab+aba−a(ab) (7.2.5)

= abc+acb−a(bc) (7.2.6)

Here too trees with fewer different decorations are just special cases of those with more. Trees with four nodes and only one decoration:

= 2(aaaa−aaa2) (7.2.7)

= 3(aaaa−aa3) + 2(a(aa2)−aaa2−aa2a) (7.2.8)

= 6(aaaa+aa3+a(aa2)−aaa2−aa2a) (7.2.9)


Trees with four nodes having one node with a different decoration:

= 2(baaa−baa2) (7.2.10)

= 2(abaa−aba2) (7.2.11)

= aaba+aaab−aa(ba) (7.2.12)

= 3(ba3−ba2a) + 6(baa2−baaa) (7.2.13)

= (−3aaab−2aaba−2abaa−a(aab)

+ 4aa(ab) +a(ab)a+ 2aba2+aa2b) (7.2.14)

= 3baaa−ba2a−baa2 (7.2.15)

= 1

2(−3aaab−2aaba−2abaa−a(aab)−2a(a2b)

+ 4aa(ab) +a(ab)a+ 4aba2+ 3aa2b) (7.2.16)

= 1

2(5aaab+ 6aaba+ 8abaa+a(aab) + 2a(a2b)

− 6aa(ab)−3a(ab)a−8aba2−3aa2b) (7.2.17)

= −aa2b+ 2aaab+aaba−aa(ba) (7.2.18) Trees with four different nodes:

= abcd+abdc−ab(cd) (7.2.19)

= 1 2a1

(b1b2b3) + X

(i,j,k)∈Π(1,2,3)

2bibjbk−bi(bjbk)− 1

2(bjbk)bi

 (7.2.20)

= 1

2(ab{c, d}+a{c, d}b+ab(cd) +a(cd)b−a(b{c, d})−a(b(cd))) (7.2.21) Here the primitive Lie brackets in HR appear denoted as {a, b}.

7.3 Words in shuffles of Lie brackets

Words with two letters:

aa = (1/2) a ⧢ a (7.3.1)
ab = (1/2)(a ⧢ b + [a, b]) (7.3.2)
ba = (1/2)(a ⧢ b − [a, b]) (7.3.3)
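A quick numerical check of (7.3.2), expanding the shuffle a ⧢ b = ab + ba and the bracket [a, b] = ab − ba into words by hand (all names in this snippet are mine):

```python
from collections import Counter
from fractions import Fraction

half = Fraction(1, 2)
rhs = Counter()
for word, coeff in [("ab", half), ("ba", half),      # 1/2 (a shuffle b)
                    ("ab", half), ("ba", -half)]:    # 1/2 [a, b] = 1/2 (ab - ba)
    rhs[word] += coeff
assert {w: c for w, c in rhs.items() if c != 0} == {"ab": 1}   # recovers the word ab
```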


And words with three letters:

aaa = (1/6) a ⧢ a ⧢ a (7.3.4)
aab = (1/12)(2 a ⧢ a ⧢ b + 2[a, [a, b]] + 3 a ⧢ [a, b]) (7.3.5)
aba = (1/12)(2 a ⧢ a ⧢ b − 4[a, [a, b]]) (7.3.6)
baa = (1/12)(2 a ⧢ a ⧢ b + 2[a, [a, b]] − 3 a ⧢ [a, b]) (7.3.7)
abc = (1/6) a ⧢ b ⧢ c + (1/4) a ⧢ [b, c] + (1/4) c ⧢ [a, b] + (1/3)[a, [b, c]] + (1/6)[[a, c], b] (7.3.8)

Or, if we abandon writing the Lie brackets in Lyndon bracketing for a moment, we observe that we take: a full shuffle, twice a shuffle of an outer letter with the commutator of the two remaining letters, a commutator of the first letter with a commutator of the remainder, and the last step repeated for the word read backwards:

abc = (1/12)(2 a ⧢ b ⧢ c + 3 a ⧢ [b, c] + 3 c ⧢ [a, b] + 2[a, [b, c]] + 2[c, [b, a]]) (7.3.9)

The advantage of this formula is that we do not lose sight of the symmetries involving the commutators. Also, that aab, aba and baa are special cases of abc is immediate. Continue with words with four letters:

aaaa = (1/24) a ⧢ a ⧢ a ⧢ a (7.3.10)
aaab = (1/96)(4 a ⧢ a ⧢ a ⧢ b + 9 a ⧢ a ⧢ [a, b] + 8 a ⧢ [a, [a, b]] − 6[a, [a, [a, b]]]) (7.3.11)
aaba = (1/96)(4 a ⧢ a ⧢ a ⧢ b + 3 a ⧢ a ⧢ [a, b] − 8 a ⧢ [a, [a, b]] − 18[a, [a, [a, b]]]) (7.3.12)
abaa = (1/96)(4 a ⧢ a ⧢ a ⧢ b − 3 a ⧢ a ⧢ [a, b] − 8 a ⧢ [a, [a, b]] + 18[a, [a, [a, b]]]) (7.3.13)
baaa = (1/96)(4 a ⧢ a ⧢ a ⧢ b − 9 a ⧢ a ⧢ [a, b] + 8 a ⧢ [a, [a, b]] + 6[a, [a, [a, b]]]) (7.3.14)
abcd = (1/20)(5[a, [b, [c, d]]] + 3[[a, [b, d]], c] − 2[[a, c], [b, d]] + 2[[a, [c, d]], b] + 2[[a, d], [b, c]] + [[[a, d], c], b])
+ (1/24) a ⧢ b ⧢ c ⧢ d
+ (1/12)(2 a ⧢ [b, [c, d]] + a ⧢ [[b, d], c] + 2 d ⧢ [a, [b, c]] + d ⧢ [[a, c], b])
+ (1/40)(3 [c, d] ⧢ b ⧢ a + [b, d] ⧢ a ⧢ c + 3 [b, c] ⧢ a ⧢ d − [a, d] ⧢ b ⧢ c + [a, c] ⧢ b ⧢ d + 3 [a, b] ⧢ c ⧢ d + 5 [a, b] ⧢ [c, d]) (7.3.15)


To express abcd, a modified algorithm for the decomposition into the Radford basis was utilized (see [10]).

7.4 Coefficients in shuffles of Lie brackets

Finally we combine our calculations to reexpress the tree-sum-coefficients from section 7.1.

Two consistency checks are available to detect possible faults in the results: when all involved fertilities equal one, only the pure shuffle term may survive. Also, if in terms of trees γ_{y,x−y} = q γ_x, q ∈ Q, after identifying two decorations in γ_{y,x−y}, then the same has to hold for the coefficients in terms of shuffles and Lie brackets.

γ1 = a (7.4.1)
γ2 = (n/2) a ⧢ a (7.4.2)
γ1,1 = (1/2)((n1 + n2) a1 ⧢ a2 + (n1 − n2)[a1, a2]) (7.4.3)

We observe that for a1 = a2 we get γ1,1 = 2γ2, as predicted.

Writing C(n, k) for the binomial coefficient:

γ3 = (1/6)(n² + C(n, 2)) a ⧢ a ⧢ a − (1/2) C(n, 2)(a ⧢ a² + [a, a²]) (7.4.4)

γ2,1 = (1/6)(n1² + 2 n1 n2 + 2 C(n1, 2) + C(n2, 2)) a1 ⧢ a1 ⧢ a2 (7.4.5)
+ (1/6)(n1² − n1 n2 − C(n1, 2) + C(n2, 2)) [a1, [a1, a2]]
+ (1/4)(n1² − n1 n2 + C(n1, 2) − C(n2, 2)) a1 ⧢ [a1, a2]
− (1/2) C(n2, 2)(a2 ⧢ a1² + [a2, a1²])
− (1/2) C(n1, 2)(a2 ⧢ (a1 a2) + [a1, (a1 a2)])

This time we have to keep in mind that after identifying a1 with a2 the primitive (a1 a2) becomes two times a². Incorporating this we see that γ2,1 = 3γ3, again in accordance with the prediction.

The Σ_cycl signify that the triplet (i, j, k) runs through the cyclic permutations of (1, 2, 3).


γ1,1,1 = (1/3)(Σ_{i=1}^{3} C(ni, 2) + Σ_{j>i} ni nj) a1 ⧢ a2 ⧢ a3
+ (1/6) Σ_cycl (ni(nj − nk) − C(nj, 2) + C(nk, 2)) [ai, [aj, ak]]
+ (1/4) Σ_cycl (ni(nj − nk) + C(nj, 2) − C(nk, 2)) ai ⧢ [aj, ak]
− (1/2) Σ_cycl C(ni, 2)(ai ⧢ (aj ak) + [ai, (aj ak)]) (7.4.6)

As there are two non-vanishing cyclic permutations, this again fulfills γ1,1,1 = 2γ2,1 if we identify a3 with a1. Finally, the coefficient with four nodes of one decoration:

γ4 = (1/24)(n³ + 4n C(n, 2) + C(n, 3)) a ⧢ a ⧢ a ⧢ a (7.4.7)
− (1/12)(10n C(n, 2) − 3 C(n, 3)) a² ⧢ a²
+ (1/6)(n C(n, 2) + C(n, 3)) [a, [a, a²]]
− (1/4)(3n C(n, 2) + C(n, 3)) a ⧢ [a, a²]
+ (1/2)(n C(n, 2) + C(n, 3))(a ⧢ (aa²) + [a, (aa²)])
− (1/2)(3n C(n, 2) − C(n, 3))(a ⧢ a³ + [a, a³])

Some intermediate results for the calculation of the γ1,1,1,1 coefficient have been presented above. But due to the rapidly increasing complexity of the involved terms it has not been calculated in this thesis.


8 Conclusion

Except for γ4, all the results and intermediate results were calculated separately for combinations of equal and different decorations. This made it possible to check the results for self-consistency, a circumstance increasing the reliability of the results significantly, as the computations are quite tedious and lengthy. Additionally, some necessary precalculations were done for γ1,1,1,1, which may be of use in further work. For higher order calculations it will probably prove unavoidable to write and use some computer algorithms to prevent errors from occurring.

A next possible step would be the search for patterns in the results of different orders, maybe revealing a direct formula for the rational factors of the shuffles. For this the coefficients with all different decorations are the most interesting, as they are the most general cases, immediately yielding the other coefficients. However, when calculating higher order coefficients (greater than three), the conversion from trees into words ceases to be one-to-one, even when excluding Lie bracket primitives. This will make the results even more complex, but at the same time the freedom of choice gained may be valuable in order to simplify the terms.

The coefficients in their current form also do not include Dyson-Schwinger equation systems which incorporate propagators as well as vertices (or different propagators/vertices). Taking those into account would not change the nature of the calculations, but the v± factors which count trees (7.1) would need to distinguish their children, adding to the complexity of the results.

It may also be that it will be most useful to focus on one quantum field theory and make use of additional simplifications coming from the explicit evaluation of Feynman graphs. This may, according to Professor Kreimer, well be one of the next steps his group is going to undertake.


References

[1] C. Bergbauer and Dirk Kreimer. Hopf algebras in renormalization theory: locality and Dyson-Schwinger equations from Hochschild cohomology. IRMA Lect. Math. Theor. Phys., 10:133–164, 2006.

[2] Isabella Bierenbaum, Dirk Kreimer, and Stefan Weinzierl. The next-to-ladder approximation for linear Dyson-Schwinger equations. Physics Letters B, 646:129–133, 2007.

[3] Wayne D. Blizard. Multiset theory. Notre Dame Journal of Formal Logic, 30:36–66, 1988.

[4] Alain Connes and Dirk Kreimer. Renormalization in quantum field theory and the Riemann-Hilbert problem II: the β-function, diffeomorphisms and the renormalization group. Commun. Math. Phys., 216:215–241, 2001.

[5] Kurusch Ebrahimi-Fard and Dirk Kreimer. The Hopf algebra approach to Feynman diagram calculations. Journal of Physics A, 2005.

[6] Loïc Foissy. Faà di Bruno subalgebras of the Hopf algebra of planar trees from combinatorial Dyson-Schwinger equations. To appear in Advances in Mathematics, 2007.

[7] Jürgen Fuchs and Christoph Schweigert. Symmetries, Lie Algebras and Representations. Cambridge Univ. Press, 1997.

[8] Dominique Manchon. Hopf algebras, from basics to applications to renormalization. Revised and updated version, May 2006.

[9] J. W. Milnor and J. C. Moore. On the structure of Hopf algebras. Ann. of Mathematics, 2:211–264, 1965.

[10] Minh Hoang Ngoc and Michel Petitot. Lyndon words, polylogarithms and the Riemann ζ function. Discrete Maths, 217:273–292, 2000.

[11] Erik Panzer. Hopf-algebraic renormalization of Kreimer's toy model. Master's thesis, 2012.

[12] Michael E. Peskin and Daniel V. Schroeder. An Introduction to Quantum Field Theory. Westview Press, 2007.

[13] D. E. Radford. A natural ring basis for the shuffle algebra and an application to group schemes. Journal of Algebra, 58:432–454, 1979.

[14] Christophe Reutenauer. Free Lie Algebras, volume 7 of London Mathematical Society Monographs New Series. Oxford Science Publications, 1993.


Acknowledgements

Hereby I would like to thank Professor Kreimer for supervising my thesis even though I am not a Humboldt university student and for the various confusing and enlightening talks we had.

I also give my thanks to Professor Knorr for supervising from the TU Berlin side, making this thesis possible.

Zusammenfassung

In this bachelor thesis, single Dyson-Schwinger equations are considered which describe vertex or propagator nestings in quantum field theories. Solution coefficients up to fourth order are calculated in the Hopf algebra of decorated rooted trees. These are subsequently converted into the shuffle Hopf algebra in the basis of Lyndon Lie brackets. The intermediate and final results are presented and checked for self-consistency. In addition, the various algebraic notions and structures are explained beforehand, in a way that is intended to allow easy and quick access to the subject.

Version

An addition was made to the argumentation on why the bracketings of Lyndon words generate a basis of the shuffle algebra over words. Apart from that this is the bachelor thesis as it was handed in on the 25th of October 2012.
