

of at most $O(n^{r-1})$ steps.

Spike path version

The statement of the lemma is almost the same as for the tight path version, Lemma 4.2.

Lemma 4.5 (Spike path Lemma). For each $r \ge 3$ there exist $c, C > 0$ and a deterministic $O(n^{r-1})$-time algorithm whose input is an $n$-vertex $r$-uniform hypergraph $G$, a pair of distinct $(r-1)$-tuples $u$ and $v$, a set $S \subset V(G)$ and an $(r-1)$-uniform exposure hypergraph $\mathcal{E}$ on the same vertex set. The output of the algorithm is either Fail or a spike path of even length $o(\log n)$ in $G$ whose ends are $u$ and $v$ and whose interior vertices are in $S$, together with an exposure hypergraph $\mathcal{E}' \supset \mathcal{E}$. We have $e(\mathcal{E}') \le e(\mathcal{E}) + O(|S|^{r-2})$ and all the edges in $E(\mathcal{E}') \setminus E(\mathcal{E})$ are contained in $S \cup u \cup v$.

Suppose that $G$ is drawn from the distribution $H^{(r)}(n,p)$ with $p \ge C(\log n)^3/n$, and that $\mathcal{E}$ does not contain any edges intersecting both $S$ and $u \cup v$. If furthermore we have $|S| = Cp^{-1}\log n$ and $e(\mathcal{E}[S]) \le c|S|^{r-1}$, then the algorithm returns Fail with probability at most $n^{-5}$.

Sketch proof. We modify the proof of Lemma 4.2 in the following simple ways. First, we maintain fans of spike paths rather than tight paths, and we change line 5 of Algorithm 3 so that the tuple $a$ to be extended is the (unique) one whose extension continues to give us a spike path. Note that whenever we have a spike path ending in $a$ and we extend it by adding one vertex $b$, the end of the new spike path is an $(r-1)$-set whose vertices are contained in $(a, b)$ (though in general these are neither the last $r-1$ vertices nor in the same order). This is all we need to make our analysis of the fan construction work; it is not necessary to change anything else in this part of the proof, nor the constants.

Second, when we come to connect fans, we let $L$ be the set of reverses of the end tuples of $F_t(u)$ and $L'$ be the set of end tuples of $F_t(v)$, and (again) look for a tight path connecting a tuple in $L$ to one in $L'$. This has no effect on the proof that a connecting path from some member of $L$ to some member of $L'$ exists, and the result is the desired spike path. The resulting spike path has even length because both fans have the same size.

4.4 Proof of the Reservoir Lemma

Idea

The reservoir path $P_{\mathrm{res}}$ will consist of absorbing structures (each carrying one vertex from $R$). More precisely, each of these absorbing structures can be seen as a small reservoir path with a reservoir of cardinality 1. Each of these small absorbers consists of a cyclic spike path plus the reservoir vertex, where pairs of spikes are additionally connected by tight paths (cf. Figure 4.1).

First we choose the reservoir set $R$ and disjoint sets $U_1$, $U_2$ and $U_3$. For every vertex in $R$ we reveal the necessary path segment in $U_1$. From the endpoints of these paths we fan out and also close the backbone structure of the reservoir inside $U_2$. Finally, we use $U_3$ and Lemma 4.2 to create the missing connections in the reservoir structures and to connect all structures into one path $P_{\mathrm{res}}$. In each step the relevant edges of the exposure graph $\mathcal{E}$ come solely from that same step.


Proof

We arbitrarily fix the reservoir set $R$ of size $2Cp^{-1}\log n$ and disjoint sets $U_1$, $U_2$ and $U_3$ of the same size such that $S = R \cup U_1 \cup U_2 \cup U_3$ has size $n/4$. First we want to build the absorbing structures for every $a \in R$; each has size roughly $t^2 = o(\log^2 n)$. There is a sketch of this structure for some $a \in R$ in Figure 4.1.
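As a quick sanity check (this calculation is not spelled out here and uses only the bounds just stated together with $p \ge C(\log n)^3/n$), the absorbing structures fit comfortably inside $S$: the total number of vertices they require is
\[
|R| \cdot o(\log^2 n) \;=\; 2Cp^{-1}\log n \cdot o(\log^2 n) \;=\; o\bigl(p^{-1}\log^3 n\bigr) \;=\; o(n),
\]
since $p^{-1}(\log n)^3 \le n/C$.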

Figure 4.1: Illustration of the absorber for one vertex $a \in R$ and $r = 5$, showing the path which contains the vertex $a$.

So we fix $a \in R$. We want to construct the following tight path on $2r-1$ vertices containing $a$ in the middle. The end tuples are $x_1 = (x_1, \dots, x_{r-1})$ and $u_a = (u_1, \dots, u_{r-1})$, and together with $a$ we require that all the edges $\{x_{r-j}, \dots, x_1, a, u_1, \dots, u_{j-1}\}$ are present for $j = 1, \dots, r$. We build this path by first choosing $x_1, \dots, x_{r-2}$ arbitrarily from $U_1$. Then we expose all edges containing $\{x_1, \dots, x_{r-2}, a\}$ to get $x_{r-1}$. We continue by exposing all edges containing the $(r-1)$-set $\{x_{r-j-1}, \dots, x_1, a, u_1, \dots, u_{j-1}\}$ to get $u_j$ for $j = 1, \dots, r-1$. The probability that in any of these cases we fail to find a new vertex inside a subset of $U_1$ of size at least $|U_1|/2$ is at most $n^{-5}$ by Chernoff's inequality. A union bound over all $r$ edges and over all $a \in R$ shows that with probability at most $n^{-3}$ we fail to construct the small starting path for some $a$.
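For concreteness, here is the case $r = 3$ (this small example is not in the original text, but follows directly from the formula above): the path has $2r-1 = 5$ vertices $x_2, x_1, a, u_1, u_2$, and the required edges for $j = 1, 2, 3$ are
\[
\{x_2, x_1, a\}, \qquad \{x_1, a, u_1\}, \qquad \{a, u_1, u_2\},
\]
that is, exactly the three consecutive triples of the tight path $x_2\,x_1\,a\,u_1\,u_2$.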

Recall that when adding edges, we always expose all edges containing one $(r-1)$-tuple and then add this tuple to $\mathcal{E}$. All $(r-1)$-tuples exposed in this step are contained in $U_1 \cup R$ and none of them contains more than one vertex from $R$. Furthermore, we have performed at most $O(|R| \cdot |U_1|) = O(n^2)$ steps so far.

Now we want to build the absorbing structure for $a$. We partition each of $U_2$ and $U_3$ into parts of size $Cp^{-1}\log n$ (plus perhaps a smaller left-over set). We apply Lemma 4.5 to the $(r-1)$-tuples $\overleftarrow{x_1}$ and $\overleftarrow{u_a}$ and connect them with a spike path of even length $2t+2$ in some part of $U_2$, with $t = o(\log n)$.

At each step, for the application of Lemma 4.5, we use the part of $U_2$ in which we have so far built the fewest spike paths; this is necessary to control the edges of $\mathcal{E}$ within this set. We use $U_2$ because both tuples are contained in $U_1$, and thus there is no problem with edges from $\mathcal{E}$ intersecting both $U_2$ and the end tuples. Let the spikes after $x_1$ and $u_a$ be called $x_2, \dots, x_t$ and $y_1, \dots, y_t$ respectively, and call the last remaining spike, opposite $u_a$, $v_a$. We apply Lemma 4.2 to find paths $P_i$ connecting the tuples $x_i$ and $y_i$ for $i = 1, \dots, t$ in a part of $U_3$. Again, we choose a part of $U_3$ which has been used for building the fewest connecting paths so far. We use parts of $U_3$ for these connections because all the spikes are contained in $U_1 \cup U_2$, and thus there are no edges of $\mathcal{E}$ intersecting both $U_3$ and the spikes. This finishes the absorbing structure for $a$; it has end tuples $u_a$ and $v_a$.

To finish $P_{\mathrm{res}}$ we enumerate the vertices in $R$ as $a_1, \dots, a_{|R|}$. Then we use Lemma 4.2 repeatedly, again at each step using a part of $U_3$ which has been used least often so far, to connect the tuples $v_{a_i}$ to $u_{a_{i+1}}$ for $i = 1, \dots, |R|-1$ with tight paths. Thus we obtain the path $P_{\mathrm{res}}$ with end tuples $u = u_{a_1}$ and $v = v_{a_{|R|}}$.

The absorbing works in the following way for the structure of a single vertex $a \in R$. It relies on the fact that each path $P_i$ can be traversed in both directions and that we can walk from any spike to its neighbouring spikes using a tight path. The path which uses $a$ (Figure 4.1) starts at $u_a$, goes through $a$ to $x_1$, and then uses the path $P_1$ to reach $y_1$. From there it goes via a tight path to $y_2$ and uses $P_2$ to go back to $x_2$. Going from $x_i$ via the path $P_i$ to $y_i$ and back from $y_{i+1}$ through $P_{i+1}$ to $x_{i+1}$, for $i = 2, \dots, t-1$, the path ends up in $v_a$ and uses all vertices. To avoid $a$ (Figure 4.2), the path starting in $u_a$ goes immediately to $y_1$, then uses the path $P_1$ to go to $x_1$. Alternating as above and traversing all the paths $P_i$ in the opposite direction, we again end up in $v_a$, having used all vertices except $a$.
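Schematically (this is my reading of Figures 4.1 and 4.2; the original does not display the traversals in this form), the two walks visit the spikes and paths in the orders
\[
u_a \to a \to x_1 \xrightarrow{P_1} y_1 \to y_2 \xrightarrow{P_2} x_2 \to x_3 \xrightarrow{P_3} y_3 \to \dots \to v_a
\]
when $a$ is used, and
\[
u_a \to y_1 \xrightarrow{P_1} x_1 \to x_2 \xrightarrow{P_2} y_2 \to y_3 \xrightarrow{P_3} x_3 \to \dots \to v_a
\]
when $a$ is avoided, where an unlabelled arrow is a walk between neighbouring spikes (or through $a$) and a labelled arrow is a traversal of the corresponding path $P_i$.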

Figure 4.2: Illustration of the absorber for one vertex $a \in R$ and $r = 5$, showing the path which does not contain the vertex $a$.

For the proof of the lemma, it remains to check that we obtain the right probability and that we are indeed able to apply Lemmas 4.2 and 4.5 as described. It is immediate from the construction that no edges of $\mathcal{E}$ are contained in $R \cup u \cup v$.

In total we perform $|R|$ connections with spike paths and $|R| \cdot t + |R| - 1$ connections with tight paths. Thus altogether we have $o(p^{-1}\log^2 n)$ executions of Lemma 4.2 and Lemma 4.5. In each application we add $O\bigl((Cp^{-1}\log n)^{r-2}\bigr)$ edges to $\mathcal{E}$ in some part of $U_2$ or $U_3$. Since each part initially contains no edges of $\mathcal{E}$, provided a given part has been used at most $p^{-1}$ times the total number of edges of $\mathcal{E}$ in it is $o\bigl((Cp^{-1}\log n)^{r-1}\bigr)$, and therefore we can apply Lemma 4.2 or 4.5 at least one more time with that part. Since $|U_2|$ and $|U_3|$ are of size linear in $n$, they each contain $\Omega(pn/\log n)$ parts. Thus, we can perform in total $\Omega(n/\log n) = \Omega(p^{-1}\log^2 n)$ applications of either Lemma 4.2 or Lemma 4.5 before all parts have been used $p^{-1}$ times and thus might acquire too many edges of $\mathcal{E}$. Since we do not need to perform that many applications, we conclude that the conditions of Lemma 4.2 and Lemma 4.5 are met each time we apply them.
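To make the counting explicit (a routine verification using only the quantities above): with $|R| = 2Cp^{-1}\log n$ and $t = o(\log n)$, the number of connections is
\[
|R| + |R|\cdot t + |R| - 1 \;=\; O\bigl(p^{-1}\log n\bigr)\cdot o(\log n) \;=\; o\bigl(p^{-1}\log^2 n\bigr),
\]
and a part that has been used at most $p^{-1}$ times contains at most
\[
p^{-1}\cdot O\bigl((Cp^{-1}\log n)^{r-2}\bigr) \;=\; O\Bigl(\frac{(Cp^{-1}\log n)^{r-1}}{C\log n}\Bigr) \;=\; o\bigl((Cp^{-1}\log n)^{r-1}\bigr)
\]
edges of $\mathcal{E}$, so the condition $e(\mathcal{E}[S]) \le c|S|^{r-1}$ required by Lemmas 4.2 and 4.5 (with $|S| = Cp^{-1}\log n$) still holds for that part.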

Since the connecting lemma fails with probability at most $n^{-5}$, the construction of the absorbing structures fails with probability at most $n^{-3}$. In every connection at most $O(n^{r-1})$ steps are performed, and thus we need $o(n^{r-1}\,p^{-1}\log^2 n) = O(n^r)$ steps for the construction.
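For the runtime bound this is again a routine check, using only $p \ge C(\log n)^3/n$:
\[
o\bigl(n^{r-1}\cdot p^{-1}\log^2 n\bigr) \;=\; o\Bigl(n^{r-1}\cdot \frac{n\log^2 n}{C(\log n)^3}\Bigr) \;=\; o\Bigl(\frac{n^r}{\log n}\Bigr) \;=\; O(n^r).
\]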

Chapter 5

Randomly perturbed graphs

Now we come to the paper with Böttcher, Montgomery, and Person [32] and the proof of Theorem 2.7. We first give a brief outline of the steps and explain a decomposition result from Ferber, Luh, and Nguyen [54], which we will use. Then the proof of Theorem 2.7 is presented in Section 5.2, with the proofs of some auxiliary lemmas postponed to Section 5.3.

5.1 Overview of the proof

Step 1. We first obtain an almost spanning embedding of all but $\varepsilon n$ vertices of $F$, using only the edges of the random graph $G(n,p)$. For this we adapt the strategy of Ferber, Luh, and Nguyen [54] to decompose $F$, and embed it using the theorem of Riordan [97] (Theorem 2.1) together with Janson's inequality (Theorem 2.18). A major difference to previous methods is that we do not choose which large subgraph of $F$ to embed, only seeking to embed some almost spanning subgraph of $F$ which covers the sparser parts of $F$.

Step 2. A key part of the remainder of our proof is obtaining a reservoir set. This will enable us to complete the partial embedding to an embedding of all of $F$. The idea behind such a reservoir set is as follows. The reservoir set will contain vertices already covered in the partial embedding of $F$ obtained in the first step. The properties of the reservoir allow us to reuse some of these vertices for embedding new $F$-vertices later in the proof, and to swap the image of $F$-vertices already embedded there to some other vertex in $G_\alpha \cup G(n,p)$ that was not used in the embedding so far. For these swaps we crucially use the deterministic graph $G_\alpha$, and the fact that in the first step we did not use $G_\alpha$ but only $G(n,p)$.

Reservoir structures of a similar nature were used for embedding tight Hamilton cycles in random hypergraphs in [4], cycle powers in random graphs in [84], and bounded-degree trees in random graphs in [90]. However, we use the interplay of the random and deterministic graphs in a new way to create our reservoir structure.

Step 3. Using additional edges of $G_\alpha$ and $G(n,p)$, we then complete the embedding of $F$, utilising the reservoir. The approach for this completion again uses ideas from [54], relying on Janson's inequality and the Hall-type matching argument for hypergraphs by Aharoni and Haxell [2] (Theorem 2.20).

The use of edges from $G_\alpha$ in this step is crucial in gaining the $\log$-term in comparison to $p$.

32 The proof given here is similar to the one in [32].