
(1)

Complexity of parameterized problems

Dániel Marx

Lecture #3 May 22, 2020

(2)

Lower bounds

So far we have seen positive results: basic algorithmic techniques for fixed-parameter tractability.

What kind of negative results do we have?

Can we show that a problem (e.g., Clique) is not FPT?

Can we show that a problem (e.g., Vertex Cover) has no algorithm with running time, say, $2^{o(k)} \cdot n^{O(1)}$?

This would require showing that $P \neq NP$: if $P = NP$, then, e.g., k-Clique is polynomial-time solvable, hence FPT.

Can we give some evidence for negative results?

(4)

Classical complexity — reminder

NP:

The class of all languages that can be recognized by a polynomial-time NTM.

The class of all languages with a witness of polynomial size

Nondeterministic Turing Machine (NTM): single tape, finite alphabet, finitely many states, the head can move left/right by one cell. In each step, the machine can branch into an arbitrary number of directions. A run is successful if at least one branch is successful.

Polynomial-time reduction from problem P to problem Q: a function φ with the following properties:

φ(x) is a yes-instance of Q ⇐⇒ x is a yes-instance of P, and φ(x) can be computed in time $|x|^{O(1)}$.

Definition: Problem Q is NP-hard if every problem in NP can be reduced to Q.

If an NP-hard problem can be solved in polynomial time, then every problem in NP can be solved in polynomial time (i.e., $P = NP$).

(6)

Parameterized complexity

To build a complexity theory for parameterized problems, we need two concepts:

An appropriate notion of reduction.

An appropriate hypothesis.

Polynomial-time reductions are not good for our purposes.

Fact: Graph G has an independent set of size k ⇔ G has a vertex cover of size n − k.

Independent Set instance (G, k) → Vertex Cover instance (G, n − k)

This is a correct polynomial-time reduction.

However, Vertex Cover is FPT, but Independent Set is not known to be FPT.

(9)

Parameterized reductions

Definition

Parameterized reduction from problem A to problem B: a function φ with the following properties:

φ(x) is a yes-instance of B ⇐⇒ x is a yes-instance of A,

φ(x) can be computed in time $f(k) \cdot |x|^{O(1)}$, where k is the parameter of x,

If k is the parameter of x and k′ is the parameter of φ(x), then $k' \le g(k)$ for some function g.

Theorem

If there is a parameterized reduction from problem A to problem B and B is FPT, then A is also FPT.

Intuitively: reduction A → B + algorithm for B gives an algorithm for A.

(10)

Parameterized reductions

Non-example: Transforming an Independent Set instance (G, k) into a Vertex Cover instance (G, n − k) is not a parameterized reduction.

Example: Transforming an Independent Set instance (G, k) into a Clique instance (G̅, k), where G̅ is the complement of G, is a parameterized reduction.
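To make this concrete, here is a minimal Python sketch (an illustration added here, not part of the original slides) of the complement-graph reduction; the parameter is passed through unchanged, so g(k) = k.

```python
from itertools import combinations

def complement(graph):
    """Return the complement of a graph given as {vertex: set_of_neighbors}."""
    vertices = set(graph)
    return {v: (vertices - graph[v] - {v}) for v in vertices}

def independent_set_to_clique(instance):
    """Parameterized reduction: Independent Set instance (G, k) maps to the
    Clique instance (complement of G, k); the parameter stays the same."""
    graph, k = instance
    return complement(graph), k

# Tiny sanity check: the path 0-1-2 has the independent set {0, 2},
# and its complement contains the clique {0, 2}.
if __name__ == "__main__":
    path = {0: {1}, 1: {0, 2}, 2: {1}}
    comp, k = independent_set_to_clique((path, 2))
    assert any(all(u in comp[v] for u, v in combinations(c, 2))
               for c in combinations(comp, k))
    print(comp)
```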

(11)

Parameterized reductions

Theorem

If there is a parameterized reduction from problem A to problem B and B is FPT, then A is also FPT.

Proof: Suppose that

the reduction has running time $f(k) \cdot n^{c_1}$,

the reduction creates an instance with parameter at most g(k), and B can be solved in time $h(k) \cdot n^{c_2}$.

Then running the reduction and solving the created instance of B gives an algorithm for A with running time

$f(k)\,n^{c_1} + h(g(k)) \cdot (f(k)\,n^{c_1})^{c_2} \le f'(k)\,n^{c_1 c_2}$

for some function f′.

(12)

Multicolored Clique

A useful variant of Clique:

Multicolored Clique: The vertices of the input graph G are colored with k colors and we have to find a clique containing one vertex from each color class.

(also called Partitioned Clique)

Theorem

There is a parameterized reduction from Clique to Multicolored Clique.

Create G′ by replacing each vertex v with k vertices, one in each color class. If u and v are adjacent in the original graph, connect all copies of u with all copies of v.

[Figure: each vertex u of G becomes copies $u_1, \ldots, u_k$ in G′, one in each of the color classes $V_1, \ldots, V_k$.]

k-clique in G ⇐⇒ multicolored k-clique in G′.

Similarly: reduction to Multicolored Independent Set.
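A minimal Python sketch of this construction (illustrative; it assumes the input graph is given as an undirected adjacency dictionary):

```python
def clique_to_multicolored_clique(graph, k):
    """Sketch of the reduction above.  Input: undirected graph as
    {vertex: set_of_neighbors} and parameter k.  Output: graph G' whose
    vertices are (v, i) for v in G and color i in 1..k, plus the coloring.
    G has a k-clique  <=>  G' has a multicolored k-clique.
    Note: copies of the same vertex are never adjacent."""
    copies = [(v, i) for v in graph for i in range(1, k + 1)]
    new_graph = {c: set() for c in copies}
    for u in graph:
        for v in graph[u]:          # u and v adjacent in G
            for i in range(1, k + 1):
                for j in range(1, k + 1):
                    new_graph[(u, i)].add((v, j))
                    new_graph[(v, j)].add((u, i))
    colors = {(v, i): i for (v, i) in copies}
    return new_graph, colors
```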

(15)

Dominating Set

Theorem

There is a parameterized reduction from Multicolored Independent Set to Dominating Set.

Proof: Let G be a graph with color classes $V_1, \ldots, V_k$. We construct a graph H such that G has a multicolored independent set of size k iff H has a dominating set of size k.

[Figure: each $V_i$ becomes a clique, with two extra vertices $x_i$, $y_i$ adjacent to all of $V_i$.]

The dominating set has to contain one vertex from each of the k cliques $V_1, \ldots, V_k$ to dominate every $x_i$ and $y_i$.

For every edge e = uv with $u \in V_i$ and $v \in V_j$, an additional vertex $w_e$, adjacent to $(V_i \setminus \{u\}) \cup (V_j \setminus \{v\})$, ensures that these selections describe an independent set.
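A Python sketch of this construction (added for illustration; it assumes the input is given as a list of color classes and a list of edges between different classes):

```python
def mis_to_dominating_set(color_classes, edges):
    """Sketch of the construction above.  color_classes: list of lists of
    vertices; edges: pairs of vertices from different classes.  Output:
    graph H as {vertex: set_of_neighbors} such that the input has a
    multicolored independent set iff H has a dominating set of size
    k = len(color_classes)."""
    H = {}

    def add_edge(a, b):
        H.setdefault(a, set()).add(b)
        H.setdefault(b, set()).add(a)

    cls = {v: i for i, Vi in enumerate(color_classes) for v in Vi}
    for i, Vi in enumerate(color_classes):
        for v in Vi:
            H.setdefault(v, set())
        for a in Vi:                        # V_i becomes a clique
            for b in Vi:
                if a != b:
                    add_edge(a, b)
        for guard in (("x", i), ("y", i)):  # x_i, y_i force one pick per V_i
            for v in Vi:
                add_edge(guard, v)
    for (u, v) in edges:                    # w_e adjacent to (V_i\{u}) and (V_j\{v})
        w = ("w", u, v)
        H.setdefault(w, set())
        for a in color_classes[cls[u]]:
            if a != u:
                add_edge(w, a)
        for b in color_classes[cls[v]]:
            if b != v:
                add_edge(w, b)
    return H
```

If both endpoints of an edge e were selected, the vertex w_e would remain undominated, which is exactly the independence constraint.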

(17)

Variants of Dominating Set

Dominating Set: Given a graph, find k vertices that dominate every vertex.

Red-Blue Dominating Set: Given a bipartite graph, find k vertices on the red side that dominate the blue side.

Set Cover: Given a set system, find k sets whose union covers the universe.

Hitting Set: Given a set system, find k elements that intersect every set in the system.

All of these problems are equivalent under parameterized reductions, hence at least as hard as Clique.
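One direction of these equivalences is very short; the following sketch (an illustration added here, not from the slides) turns a Dominating Set instance into an equivalent Set Cover instance using closed neighborhoods, with the parameter unchanged.

```python
def dominating_set_to_set_cover(graph):
    """A dominating set of size k in G corresponds to k closed neighborhoods
    N[v] whose union covers the whole vertex set.
    Input graph: {vertex: set_of_neighbors}.  Output: (universe, sets)."""
    universe = set(graph)
    sets = {v: graph[v] | {v} for v in graph}   # closed neighborhood N[v]
    return universe, sets                        # Set Cover instance, same k
```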

(18)

Basic hypotheses

It seems that parameterized complexity theory cannot be built on assuming $P \neq NP$; we have to assume something stronger.

Engineers’ Hypothesis

k-Clique cannot be solved in time $f(k) \cdot n^{O(1)}$.

Theorists’ Hypothesis

k-Step Halting Problem (is there a computation path of the given NTM that stops in k steps?) cannot be solved in time $f(k) \cdot n^{O(1)}$.

Exponential Time Hypothesis (ETH)

n-variable 3SAT cannot be solved in time $2^{o(n)}$.

Which hypothesis is the most plausible?

(22)

Summary

Independent Set and k-Step Halting Problem can be reduced to each other ⇒ Engineers’ Hypothesis and Theorists’ Hypothesis are equivalent!

Independent Set and k-Step Halting Problem can be reduced to Dominating Set.

Is there a parameterized reduction from Dominating Set to Independent Set?

Probably not. Unlike in NP-completeness, where most problems are equivalent, here we have a hierarchy of hard problems.

Independent Set is W[1]-complete. Dominating Set is W[2]-complete.

This distinction does not matter if we only care about whether a problem is FPT or not!

(24)

Boolean circuit

A Boolean circuit consists of input gates, negation gates, AND gates, OR gates, and a single output gate.

[Figure: an example Boolean circuit.]

Circuit Satisfiability: Given a Boolean circuit C, decide if there is an assignment on the inputs of C making the output true.

Weight of an assignment: number of true values.

Weighted Circuit Satisfiability: Given a Boolean circuit C and an integer k, decide if there is an assignment of weight k making the output true.

(26)

Weighted Circuit Satisfiability

Independent Set can be reduced to Weighted Circuit Satisfiability. [Figure: a circuit expressing Independent Set.]

Dominating Set can be reduced to Weighted Circuit Satisfiability. [Figure: a circuit expressing Dominating Set.]

To express Dominating Set, we need more complicated circuits.
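To illustrate the easy case, here is a sketch (added here, not from the slides) of the circuit for Independent Set: one input per vertex, a small gate NOT x_u OR NOT x_v per edge, and a single large AND collecting them; a weight-k satisfying assignment is exactly an independent set of size k. The brute-force weight-k check is only for demonstration.

```python
from itertools import combinations

def independent_set_circuit(graph):
    """Return a function evaluating the Independent Set circuit: an AND over
    all edges of the small gates (NOT x_u OR NOT x_v).
    Input graph: {vertex: set_of_neighbors}."""
    edges = {frozenset((u, v)) for u in graph for v in graph[u]}
    def circuit(true_inputs):
        return all((u not in true_inputs) or (v not in true_inputs)
                   for e in edges for (u, v) in [tuple(e)])
    return circuit

def weighted_circuit_sat(graph, k):
    """Weighted Circuit Satisfiability by brute force: true iff the circuit
    has a satisfying assignment of weight k, i.e. an independent set of size k."""
    circuit = independent_set_circuit(graph)
    return any(circuit(set(S)) for S in combinations(graph, k))

# The 4-cycle has an independent set of size 2 but not of size 3:
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(weighted_circuit_sat(cycle, 2), weighted_circuit_sat(cycle, 3))  # True False
```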

(28)

Depth and weft

The depth of a circuit is the maximum length of a path from an input to the output.

A gate is large if it has more than 2 inputs. The weft of a circuit is the maximum number of large gates on a path from an input to the output.

Independent Set: weft 1, depth 3.

Dominating Set: weft 2, depth 2.
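A small sketch (added here, under assumptions: circuits are DAGs encoded as {gate: (kind, inputs)} with the output gate named "out") showing how depth and weft can be computed; the example is the Independent Set circuit for a triangle and reproduces "weft 1, depth 3".

```python
from functools import lru_cache

def depth_and_weft(circuit):
    """Compute (depth, weft) of a circuit given as a DAG
    {gate: (kind, list_of_input_gates)}, kind in {"in", "not", "and", "or"}.
    Depth = max edges on an input-to-output path; weft = max number of
    large gates (more than 2 inputs) on such a path."""
    @lru_cache(maxsize=None)
    def walk(gate):
        kind, inputs = circuit[gate]
        if kind == "in":
            return 0, 0
        d, w = zip(*(walk(g) for g in inputs))
        large = 1 if len(inputs) > 2 else 0
        return max(d) + 1, max(w) + large
    return walk("out")

# Independent Set circuit for the triangle {1,2,3}: negated inputs,
# one small OR per edge, one large AND at the top.
circuit = {
    "x1": ("in", []), "x2": ("in", []), "x3": ("in", []),
    "n1": ("not", ["x1"]), "n2": ("not", ["x2"]), "n3": ("not", ["x3"]),
    "e12": ("or", ["n1", "n2"]), "e13": ("or", ["n1", "n3"]), "e23": ("or", ["n2", "n3"]),
    "out": ("and", ["e12", "e13", "e23"]),
}
print(depth_and_weft(circuit))  # (3, 1): depth 3, weft 1
```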

(29)

The W-hierarchy

Let C[t,d] be the set of all circuits having weft at most t and depth at most d.

Definition

A problem P is in the class W[t] if there is a constant d and a parameterized reduction from P to Weighted Circuit Satisfiability of C[t,d].

We have seen that Independent Set is in W[1] and Dominating Set is in W[2].

Fact: Independent Set is W[1]-complete.

Fact: Dominating Set is W[2]-complete.

If any W[1]-complete problem is FPT, then FPT = W[1] and every problem in W[1] is FPT.

If any W[2]-complete problem is in W[1], then W[1] = W[2].

⇒ If there is a parameterized reduction from Dominating Set to Independent Set, then W[1] = W[2].

(31)

Weft

Weft is a term related to weaving cloth: it is the thread that runs from side to side in the fabric.

(32)

Parameterized reductions

Typical NP-hardness proofs: reduction from, e.g., Clique or 3SAT, representing each vertex/edge/variable/clause with a gadget.

[Figure: variable gadgets $v_1, \ldots, v_6$ and clause gadgets $C_1, \ldots, C_4$.]

This usually doesn’t work for parameterized reductions: we cannot afford the parameter increase.

Types of parameterized reductions:

Reductions keeping the structure of the graph: Clique ⇒ Independent Set.

Reductions with vertex representations: Multicolored Independent Set ⇒ Dominating Set.

Reductions with vertex and edge representations.

(34)

Odd Set

Odd Set: Given a set system $\mathcal{F}$ over a universe U and an integer k, find a set S of at most k elements such that $|S \cap F|$ is odd for every $F \in \mathcal{F}$.

Theorem

Odd Set is W[1]-hard parameterized by k.

First try: reduction from Multicolored Independent Set. Let $U = V_1 \cup \ldots \cup V_k$ and introduce each set $V_i$ into $\mathcal{F}$.

⇒ The solution has to contain exactly one element from each $V_i$.

If $xy \in E(G)$, how can we express that $x \in V_i$ and $y \in V_j$ cannot be selected simultaneously? Seems difficult:

introducing $\{x, y\}$ into $\mathcal{F}$ forces that exactly one of x and y appears in the solution,

introducing $\{x\} \cup (V_j \setminus \{y\})$ into $\mathcal{F}$ forces that either both x and y or none of x and y appear in the solution.

(40)

Odd Set

Reduction from Multicolored Clique. $U := \bigcup_{i=1}^{k} V_i \cup \bigcup_{1 \le i < j \le k} E_{i,j}$, $k' := k + \binom{k}{2}$.

Let $\mathcal{F}$ contain $V_i$ ($1 \le i \le k$) and $E_{i,j}$ ($1 \le i < j \le k$).

For every $v \in V_i$ and $x \neq i$, we introduce the sets:

$(V_i \setminus \{v\}) \cup \{$every edge from $E_{i,x}$ with endpoint $v\}$

$(V_i \setminus \{v\}) \cup \{$every edge from $E_{x,i}$ with endpoint $v\}$

[Figure: the universe consists of the vertex classes $V_1, \ldots, V_k$ and the edge classes $E_{i,j}$.]
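A Python sketch of this construction (added for illustration; it assumes color classes and edges are given explicitly, and treats $E_{i,x}$ and $E_{x,i}$ as the single unordered edge class between classes i and x):

```python
def multicolored_clique_to_odd_set(color_classes, edges):
    """Build the Odd Set instance described above.
    color_classes: list of k lists of vertices; edges: pairs (u, v) with u, v
    in different classes.  Universe: all vertices plus one element per edge.
    Returns (universe, family_of_sets, new_parameter)."""
    k = len(color_classes)
    cls = {v: i for i, Vi in enumerate(color_classes) for v in Vi}
    E = {}                      # E[(i, j)] = edge elements between V_i and V_j
    for (u, v) in edges:
        i, j = sorted((cls[u], cls[v]))
        E.setdefault((i, j), set()).add(("e", u, v))
    family = [set(Vi) for Vi in color_classes] + [set(s) for s in E.values()]
    for i, Vi in enumerate(color_classes):
        for v in Vi:
            for x in range(k):
                if x == i:
                    continue
                a, b = min(i, x), max(i, x)
                # edges between classes i and x that have endpoint v
                incident = {e for e in E.get((a, b), set()) if v in (e[1], e[2])}
                family.append((set(Vi) - {v}) | incident)
    universe = {v for Vi in color_classes for v in Vi} | {e for s in E.values() for e in s}
    new_k = k + k * (k - 1) // 2
    return universe, family, new_k
```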

(44)

Odd Set

Reduction from Multicolored Clique.

For every $v \in V_i$ and $x \neq i$, we introduce the sets:

$(V_i \setminus \{v\}) \cup \{$every edge from $E_{i,x}$ with endpoint $v\}$

$(V_i \setminus \{v\}) \cup \{$every edge from $E_{x,i}$ with endpoint $v\}$

$v \in V_i$ selected ⇐⇒ edges with endpoint v are selected from $E_{i,x}$ and $E_{x,i}$

$v_i \in V_i$ and $v_j \in V_j$ selected ⇐⇒ edge $v_iv_j$ is selected in $E_{i,j}$

(46)

Vertex and edge representation

Key idea

Represent the vertices of the clique by k gadgets.

Represent the edges of the clique by $\binom{k}{2}$ gadgets.

Connect edge gadget $E_{i,j}$ to vertex gadgets $V_i$ and $V_j$ such that if $E_{i,j}$ represents the edge between $x \in V_i$ and $y \in V_j$, then it forces $V_i$ to x and $V_j$ to y.

(47)

Variants of Hitting Set

The following problems are W[1]-hard, with very similar proofs:

Odd Set

Exact Odd Set (find a set of size exactly k ...)

Exact Even Set

Unique Hitting Set (at most k elements that hit each set exactly once)

Exact Unique Hitting Set (exactly k elements that hit each set exactly once)

A problem that is also W[1]-hard, but requires very different techniques:

Even Set: Given a set system $\mathcal{F}$ and an integer k, find a nonempty set S of at most k elements such that $|S \cap F|$ is even for every $F \in \mathcal{F}$.

(49)

Summary

By parameterized reductions, we can show that lots of parameterized problems are at least as hard as Clique, hence unlikely to be fixed-parameter tractable.

Connection with Turing machines gives some supporting evidence for hardness (only of theoretical interest).

TheW-hierarchy classifies the problems according to hardness (only of theoretical interest).

Important trick inW[1]-hardness proofs: vertex and edge representations.

(51)

Shift of focus

FPT or W[1]-hard? (a qualitative question)

Quantitative questions:

For FPT problems: what is the best possible multiplier f(k) in the running time $f(k) \cdot n^{O(1)}$? $2^k$? $1.0001^k$? $2^{\sqrt{k}}$?

For W[1]-hard problems: what is the best possible exponent g(k) in the running time $f(k) \cdot n^{g(k)}$? $n^{O(k)}$? $n^{\log k}$? $n^{\log\log k}$?

(52)

Better algorithms for Vertex Cover

We have seen a $2^k \cdot n^{O(1)}$ time algorithm.

Easy to improve to, e.g., $1.618^k \cdot n^{O(1)}$. Current best f(k): $1.2738^k \cdot n^{O(1)}$. Lower bounds?

Is, say, $1.001^k \cdot n^{O(1)}$ time possible?

Is $2^{k/\log k} \cdot n^{O(1)}$ time possible?

Of course, for all we know, it is possible that $P = NP$ and Vertex Cover is polynomial-time solvable.

⇒ We can hope only for conditional lower bounds.
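For reference, a minimal sketch of the kind of $2^k \cdot n^{O(1)}$ algorithm referred to above (a bounded search tree branching on the endpoints of an uncovered edge); this is an illustration added here, not a transcription of the earlier lecture.

```python
def vertex_cover_branching(graph, k):
    """Pick any remaining edge uv; some endpoint must be in the cover,
    so branch on deleting u or v.  Depth at most k  ->  at most 2^k leaves.
    Input graph: {vertex: set_of_neighbors} (undirected).
    Returns a cover of size at most k, or None."""
    edge = next(((u, v) for u in graph for v in graph[u]), None)
    if edge is None:
        return set()            # no edges left: empty cover suffices
    if k == 0:
        return None             # edges remain but no budget left
    u, v = edge
    for w in (u, v):
        rest = {a: {b for b in nbrs if b != w}
                for a, nbrs in graph.items() if a != w}
        cover = vertex_cover_branching(rest, k - 1)
        if cover is not None:
            return cover | {w}
    return None

# Example: a 4-cycle has a vertex cover of size 2 (e.g. {0, 2}) but not of size 1.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(vertex_cover_branching(cycle, 2))
print(vertex_cover_branching(cycle, 1))  # None
```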

(55)

Exponential Time Hypothesis (ETH)

3CNF: φ is a conjunction of clauses, where each clause is a disjunction of at most 3 literals (= a variable or its negation), e.g., $(x_1 \vee x_3 \vee \bar{x}_4) \wedge (\bar{x}_2 \vee \bar{x}_3) \wedge (x_1 \vee x_2 \vee x_4)$.

3SAT: given a 3CNF formula φ with n variables and m clauses, decide whether φ is satisfiable.

Current best algorithm is $1.30704^n$ [Hertli 2011]. Can we do significantly better, e.g., $2^{O(n/\log n)}$?

Hypothesis introduced by Impagliazzo, Paturi, and Zane in 2001:

Exponential Time Hypothesis (ETH)

[consequence of]

There is no $2^{o(n)}$-time algorithm for n-variable 3SAT.

(56)


Exponential Time Hypothesis (ETH)

[real statement]

There is a constant $\delta > 0$ such that there is no $O(2^{\delta n})$-time algorithm for 3SAT.

(57)

Sparsification

Exponential Time Hypothesis (ETH)

[consequence of]

There is no $2^{o(n)}$-time algorithm for n-variable 3SAT.

Observe: an n-variable 3SAT formula can have $m = \Omega(n^3)$ clauses.

Are there algorithms that are subexponential in the size n + m of the 3SAT formula?

Sparsification Lemma

There is a $2^{o(n)}$-time algorithm for n-variable 3SAT ⟺ there is a $2^{o(n+m)}$-time algorithm for n-variable m-clause 3SAT.

Intuitively: when considering a hard 3SAT instance, we can assume that it has m = O(n) clauses.

(60)

Lower bounds based on ETH

Exponential Time Hypothesis (ETH) + Sparsification Lemma

There is no $2^{o(n+m)}$-time algorithm for n-variable m-clause 3SAT.

The textbook reduction from 3SAT to Vertex Cover:

formula is satisfiable ⇔ there is a vertex cover of size n + 2m

[Figure: variable gadgets and a clause gadget for the clause $x_1 \vee \bar{x}_2 \vee x_3$.]

(63)

Lower bounds based on ETH

Exponential Time Hypothesis (ETH) + Sparsification Lemma

There is no $2^{o(n+m)}$-time algorithm for n-variable m-clause 3SAT.

The textbook reduction from 3SAT to Vertex Cover:

3SAT formula φ with n variables and m clauses ⇒ graph G with O(n + m) vertices and O(n + m) edges.

Corollary

Assuming ETH, there is no $2^{o(k)} \cdot n^{O(1)}$ algorithm for Vertex Cover.
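A Python sketch of the textbook reduction (added for illustration; it assumes clauses are given with exactly three literals, shorter clauses being padded by repeating a literal, a literal being a pair (variable, is_positive)):

```python
def threesat_to_vertex_cover(clauses):
    """Variable gadget: edge between the two literal vertices of each variable.
    Clause gadget: a triangle, each corner connected to the matching literal.
    The formula is satisfiable iff the graph has a vertex cover of size n + 2m,
    and the graph has O(n + m) vertices and edges."""
    variables = {v for clause in clauses for (v, _) in clause}
    graph = {}

    def add_edge(a, b):
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    for v in variables:                       # variable gadget: x_v -- not-x_v
        add_edge(("lit", v, True), ("lit", v, False))
    for idx, clause in enumerate(clauses):    # clause gadget: a triangle
        nodes = [("cl", idx, pos) for pos in range(len(clause))]
        for a in nodes:
            for b in nodes:
                if a != b:
                    add_edge(a, b)
        for pos, (v, sign) in enumerate(clause):   # connect to matching literal
            add_edge(("cl", idx, pos), ("lit", v, sign))
    target = len(variables) + 2 * len(clauses)
    return graph, target
```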

(64)

Other problems

There are polynomial-time reductions from 3SAT to many problems such that the reduction creates a graph with O(n + m) vertices/edges.

Consequence: Assuming ETH, the following problems cannot be solved in time $2^{o(n)}$ and hence in time $2^{o(k)} \cdot n^{O(1)}$ (but $2^{O(k)} \cdot n^{O(1)}$ time algorithms are known):

Vertex Cover
Longest Cycle
Feedback Vertex Set
Multiway Cut
Odd Cycle Transversal
Steiner Tree
. . .

(65)

The race for better FPT algorithms

[Figure: the race for better FPT algorithms, ranging from subexponential through single exponential, “slightly superexponential”, and double exponential up to a tower of exponentials.]

(66)

Edge Clique Cover

Edge Clique Cover: Given a graph G and an integer k, cover the edges of G with at most k cliques.

(the cliques need not be edge disjoint)

Equivalently: can G be represented as an intersection graph over a k-element universe?

(69)

Edge Clique Cover

Edge Clique Cover: Given a graph G and an integer k, cover the edges of G with at most k cliques.

(the cliques need not be edge disjoint)

Simple algorithm (sketch)

If two adjacent vertices have the same neighborhood (“twins”), then remove one of them.

If there are no twins and no isolated vertices, then $|V(G)| > 2^k$ implies that there is no solution.

Use brute force.

Running time: $2^{2^{O(k)}} \cdot n^{O(1)}$, a double-exponential dependence on k!
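A sketch of the kernelization step (added here for illustration; it reads "same neighborhood" as the same closed neighborhood, and leaves the final brute force on the kernel as a comment):

```python
def edge_clique_cover_kernel(graph, k):
    """Repeatedly delete isolated vertices and one vertex of every twin pair
    (adjacent vertices with the same closed neighborhood); if more than 2^k
    vertices remain, report a no-instance.  Brute force on the kernel
    (omitted) then gives the 2^(2^O(k)) * n^O(1) algorithm sketched above.
    Input graph: {vertex: set_of_neighbors}."""
    g = {v: set(nbrs) for v, nbrs in graph.items()}

    def delete(v):
        for u in g[v]:
            g[u].discard(v)
        del g[v]

    changed = True
    while changed:
        changed = False
        for v in list(g):
            if v in g and not g[v]:           # isolated vertex: nothing to cover
                delete(v); changed = True
        for v in list(g):
            if v not in g:
                continue
            for u in list(g.get(v, ())):
                if u in g and g[u] | {u} == g[v] | {v}:   # u and v are twins
                    delete(u); changed = True
    if len(g) > 2 ** k:
        return None                           # provably no cover with k cliques
    return g                                  # kernel with at most 2^k vertices
```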

(70)

Edge Clique Cover

Edge Clique Cover: Given a graph G and an integer k, cover the edges of G with at most k cliques.

(the cliques need not be edge disjoint)

Double-exponential dependence on k cannot be avoided!

Theorem

Assuming ETH, there is no $2^{2^{o(k)}} \cdot n^{O(1)}$ time algorithm for Edge Clique Cover.

Proof: reduction from 3SAT with n variables to Edge Clique Cover with parameter k = O(log n).

(72)

Slightly superexponential algorithms

Running times of the form $2^{O(k \log k)} \cdot n^{O(1)}$ appear naturally in parameterized algorithms, usually for one of two reasons:

1. Branching into k directions at most k times explores a search tree of size $k^k = 2^{O(k \log k)}$.

Example: Feedback Vertex Set in the first lecture.

2. Trying $k! = 2^{O(k \log k)}$ permutations of k elements (or partitions, matchings, ...).

Can we avoid these steps and obtain $2^{O(k)} \cdot n^{O(1)}$ time algorithms?

(75)

Closest String

Closest String

Given strings $s_1, \ldots, s_k$ of length L over alphabet Σ, and an integer d, find a string s (of length L) such that the Hamming distance $d(s, s_i) \le d$ for every $1 \le i \le k$. (Hamming distance: number of differing positions.)

s1: C B D C C A C B B
s2: A B D B C A B D B
s3: C D D B A C C B D
s4: D D A B A C C B D
s5: A C D B D D C B C
s:  A D D B C A C B D

Different parameters:

Number k of strings.
Length L of the strings.
Maximum distance d.
Alphabet size |Σ|.

We can ask for running times of, for example:

$f(d) \cdot n^{O(1)}$: FPT parameterized by d

$f(k, |\Sigma|) \cdot n^{O(1)}$: FPT with combined parameters k and |Σ|

Closest String

Theorem

Closest String can be solved in time $2^{O(d \log d)} \cdot n^{O(1)}$.

Main idea: given a string y at Hamming distance ℓ from some solution, we use branching to find a string at distance at most ℓ − 1 from some solution.

Initially, $y = s_1$ is at distance at most d from some solution.

If y is not a solution, then there is an $s_i$ with $d(y, s_i) \ge d + 1$.

Look at the first d + 1 positions p where $s_i[p] \neq y[p]$. For every solution z, it is true for at least one such p that $s_i[p] = z[p]$.

Branch on choosing one of these d + 1 positions and replace y[p] with $s_i[p]$: the distance of y from solution z decreases to ℓ − 1.

Running time: $(d+1)^d \cdot n^{O(1)} = 2^{O(d \log d)} \cdot n^{O(1)}$.
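A direct Python sketch of this branching algorithm (added here for illustration), run on the example table from the earlier slide with d = 4:

```python
def closest_string(strings, d):
    """Branching algorithm sketched above.  strings: equal-length strings.
    Returns a center string within Hamming distance d of every input, or None.
    Search depth is bounded by d, branching factor at most d + 1."""
    def dist(a, b):
        return sum(1 for x, y in zip(a, b) if x != y)

    def branch(y, budget):
        bad = next((s for s in strings if dist(y, s) > d), None)
        if bad is None:
            return y                       # y is already a valid center
        if budget == 0:
            return None
        mismatches = [p for p in range(len(y)) if y[p] != bad[p]][:d + 1]
        for p in mismatches:               # some solution agrees with bad at one of them
            candidate = y[:p] + bad[p] + y[p + 1:]
            result = branch(candidate, budget - 1)
            if result is not None:
                return result
        return None

    return branch(strings[0], d)

strings = ["CBDCCACBB", "ABDBCABDB", "CDDBACCBD", "DDABACCBD", "ACDBDDCBC"]
print(closest_string(strings, 4))   # some center within distance 4 of all five
```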

(79)

Closest String

Theorem

Assuming ETH, Closest String has no $2^{o(d \log d)} \cdot n^{O(1)}$ algorithm.

Proof: reduction from 3SAT with O(d log d) variables to a Closest String instance with distance d.

(81)

Better algorithms for W[1]-hard problems

$O(n^k)$ algorithm for k-Clique by brute force.

$O(n^{0.79k})$ algorithms using fast matrix multiplication.

W[1]-hardness of k-Clique gives evidence that there is no $f(k) \cdot n^{O(1)}$ time algorithm.

But what about improvements of the exponent O(k)?

Hypothetical improvements to aim for: $n^{\sqrt{k}}$? $n^{k/\log\log k}$? $n^{\log k}$? $2^{2^k} \cdot n^{\log\log\log k}$?

Theorem

Assuming ETH, k-Clique has no $f(k) \cdot n^{o(k)}$ algorithm for any computable function f. In particular, ETH implies that k-Clique is not FPT.

(84)

Lower bound for k-Clique

Theorem

Assuming ETH, k-Clique has no $f(k) \cdot N^{o(k)}$ algorithm for any computable function f.

Proof:

The textbook reduction from 3SAT to 3-Coloring shows that, assuming ETH, there is no $2^{o(n)}$ time algorithm for 3-Coloring on an n-vertex graph. Then:

3-Coloring on n vertices ⇒ k-Clique instance $(G_{\mathrm{Clique}}, k)$ with $N \approx 3^{n/k}$ vertices.

(85)

Lower bound for k-Clique

Theorem

Assuming ETH, k-Clique has no $f(k) \cdot N^{o(k)}$ algorithm for any computable function f.

Proof (continued):

Split the n vertices into k groups of size n/k; each group has at most $3^{n/k}$ colorings. Create a vertex for each consistent coloring of each group.

(89)

Lower bound for k-Clique

Theorem

Assuming ETH, k-Clique has no $f(k) \cdot N^{o(k)}$ algorithm for any computable function f.

Proof (continued):

Connect two vertices if they represent colorings (of different groups) that are consistent with each other.
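A Python sketch of this construction (added here for illustration; vertices are assumed to be 0..n-1 and any balanced split into k groups works):

```python
from itertools import product

def coloring_to_clique(n, edges, k):
    """Split vertices 0..n-1 into k groups, enumerate the (at most 3^(n/k))
    internally consistent 3-colorings of each group, create one new vertex per
    (group, coloring), and connect two new vertices iff they come from
    different groups and their colorings conflict on no edge.
    The new graph has a k-clique iff the original graph is 3-colorable."""
    groups = [list(range(i, n, k)) for i in range(k)]
    edge_set = {frozenset(e) for e in edges}

    def consistent(assignment_a, assignment_b):
        merged = {**assignment_a, **assignment_b}
        return all(not (u in merged and v in merged and merged[u] == merged[v])
                   for e in edge_set for (u, v) in [tuple(e)])

    nodes = []
    for gi, group in enumerate(groups):
        for colors in product(range(3), repeat=len(group)):
            assignment = dict(zip(group, colors))
            if consistent(assignment, {}):        # no conflict inside the group
                nodes.append((gi, tuple(sorted(assignment.items()))))
    adj = {v: set() for v in nodes}
    for a in nodes:
        for b in nodes:
            if a[0] != b[0] and consistent(dict(a[1]), dict(b[1])):
                adj[a].add(b)
    return adj
```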

(91)

Lower bound for k-Clique

Theorem

Assuming ETH, k-Clique has no $f(k) \cdot N^{o(k)}$ algorithm for any computable function f.

Proof (continued):

We have constructed a new graph with $N = k \cdot 3^{n/k}$ vertices that has a k-clique if and only if the original graph is 3-colorable.

Suppose that k-Clique has a $2^k \cdot N^{o(k)}$ time algorithm.

Doing the reduction with $k := \log n$ gives us an algorithm for 3-Coloring with running time

$2^k \cdot N^{o(k)} = n \cdot (\log n)^{o(\log n)} \cdot 3^{n \cdot o(\log n)/\log n} = 2^{o(n)}$.

Choosing $k := \log\log n$ would rule out a $2^{2^k} \cdot N^{o(k)}$ algorithm, etc.

In general, we need to choose roughly $k := f^{-1}(n)$ groups (technicalities omitted).

(95)

Tight bounds

Theorem

Assuming ETH, k-Clique has no $f(k) \cdot n^{o(k)}$ algorithm for any computable function f.

Transferring to other problems:

k-Clique instance (x, k) ⇒ Problem A instance (x′, g(k))

$f(k) \cdot n^{o(k)}$ algorithm for k-Clique ⇐ $f(k) \cdot n^{o(g^{-1}(k))}$ algorithm for A

Bottom line:

To rule out $f(k) \cdot n^{o(k)}$ algorithms, we need a parameterized reduction that blows up the parameter at most linearly.

To rule out $f(k) \cdot n^{o(\sqrt{k})}$ algorithms, we need a parameterized reduction that blows up the parameter at most quadratically.

(97)

Tight bounds

Assuming ETH, there is no $f(k) \cdot n^{o(k)}$ time algorithm for:

Set Cover
Hitting Set
Connected Dominating Set
Independent Dominating Set
Partial Vertex Cover
Dominating Set in bipartite graphs
. . .

(98)

Summary

Parameterized reductions from Clique or Independent Set can give evidence that a problem is not FPT.

ETH can give tight bounds on the f(k) for FPT problems.

ETH can give tight bounds on the exponent of n for W[1]-hard problems.
