(1)

Worst Case and Probabilistic Analysis of the 2-Opt Algorithm for the TSP

Matthias Englert, Heiko Röglin, Berthold Vöcking

Department of Computer Science RWTH Aachen

Oberwolfach 2007

(2)

Traveling Salesperson Problem

Traveling Salesperson Problem (TSP)
Input: weighted (complete) graph G = (V, E, d) with d : E → R.
Goal: Find a Hamiltonian cycle of minimum length.


(4)

Theoretical Results

General TSP
- Strongly NP-hard.
- Not approximable within any polynomial factor.

Metric TSP (triangle inequality: a + b ≥ c)
- Strongly NP-hard.
- 3/2-approximation [Christofides (1976)].
- APX-hard: lower bound of 220/219 [Papadimitriou, Vempala (2000)].

Euclidean TSP (in the plane, d(P_1, P_2) = √((x_1 − x_2)² + (y_1 − y_2)²))
- Cities ⊂ R^d.
- Strongly NP-hard (⇒ no FPTAS) [Papadimitriou (1977)].
- PTAS exists [Arora (1996), Mitchell (1996)].


(7)

Experimental Results

Numerous experimental studies:
- TSPLIB contains "real-world" and random (Euclidean) instances.
- DIMACS Implementation Challenge [Johnson and McGeoch (2002)].

Some conclusions:
- Worst-case results are often too pessimistic.
- The PTAS is too slow on large-scale instances.
- The most successful algorithms in practice (w.r.t. quality and running time) rely on local search.


(9)

2-Opt Heuristic

1. Start with an arbitrary tour.
2. Remove two edges from the tour.
3. Complete the tour by two other edges.
4. Repeat steps 2 and 3 until no local improvement is possible anymore.
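The four steps above can be sketched in code. This is an illustrative first-improvement 2-Opt for Euclidean instances, not the implementation behind the talk; the function name `two_opt` and the identity initial tour are choices made here:

```python
import math
import random

def two_opt(pts):
    """First-improvement 2-Opt on points in the plane.

    Repeatedly removes two tour edges (a,b) and (c,e) and reconnects
    them as (a,c) and (b,e), i.e. reverses the tour segment between
    them, until no such exchange shortens the tour."""
    n = len(pts)
    d = lambda u, v: math.dist(pts[u], pts[v])  # Euclidean distance
    tour = list(range(n))  # arbitrary initial tour (identity order)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges share the vertex tour[0]
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                # exchange is improving iff the two new edges are shorter
                if d(a, c) + d(b, e) < d(a, b) + d(c, e):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Each exchange strictly shortens the tour, so the loop terminates in a 2-optimal (locally optimal) tour.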


(13)

Why 2-Opt?

Experiments on Random Euclidean Instances [Johnson and McGeoch (2002)]

Approximation Ratio:
- Christofides (for n ≤ 10^5): ≈ 1.1
- 2-Opt (for n ≤ 10^6): ≈ 1.05

Number of Local Improvements of 2-Opt:
- Greedy Starts: probably O(n)
- Random Starts: probably O(n log n)


(15)

Running time of 2-Opt: Known and New Results

             | General TSP  | Euclidean metric                     | Manhattan metric
  average    | n^{3+o(1)}   | Õ(n^{10}); Õ(n^{4.33}) [Õ(n^{3.83})] | Õ(n^6); Õ(n^4) [Õ(n^{3.5})]
  smoothed   |              |                                      |
  worst-case | 2^{Ω(n)}     | 2^{Ω(n)}                             | 2^{Ω(n)}

Average-case results Õ(n^{10}) and Õ(n^6): [Chandra, Karloff, Tovey (1999)].
Worst-case results: [Lueker (1975)].
All other bounds are our results.

(19)

Lower Bound

1 Introduction

2 Lower Bound

3 Upper Bound

4 Extensions and Open Problems

(20)

Lower Bound

Theorem
For every n ∈ N, there is a graph in the Euclidean plane with 8n vertices on which 2-Opt can make 2^{n+3} − 14 steps.

Gadgets G_1, ..., G_n.
Possible states of a gadget: (Long,Long), (Long,Short), (Short,Long), (Short,Short).

(21)

Lower Bound

State encoding: (Long,Long) = 0, (Long,Short) = 1, (Short,Short) = 2.

[Figure sequence: the gadgets step through the states 0 → 1 → 2, with "Trigger" transitions between gadgets.]

Gadget G_i is reset 2^{i−1} times to (Long,Long) = 0.

(30)

Euclidean Embedding of the Gadgets

(31)

Upper Bound

1 Introduction

2 Lower Bound

3 Upper Bound

4 Extensions and Open Problems

(32)

Upper Bound

Theorem
Assume that n points are placed independently, uniformly at random in the unit square [0, 1]^2. The expected number of 2-Opt steps is bounded by O(n^{4+1/3} · log n) (for every initial tour and every pivot rule).
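The quantity the theorem bounds can be observed empirically. The sketch below counts improving steps on uniform random points; the first-improvement pivot rule, the identity initial tour, and the function name are choices of this sketch, not part of the theorem:

```python
import math
import random

def count_2opt_steps(pts):
    """Run 2-Opt (first-improvement pivot rule) from the identity tour
    and count the number of improving steps performed."""
    n = len(pts)
    d = lambda u, v: math.dist(pts[u], pts[v])
    tour, steps, improved = list(range(n)), 0, True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # same pair of cycle edges
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if d(a, c) + d(b, e) < d(a, b) + d(c, e):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    steps += 1  # one improving 2-Opt step
                    improved = True
    return steps
```

The theorem says the expectation of this count is O(n^{4+1/3} · log n) for points drawn uniformly from [0, 1]^2.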

(33)

Simple Polynomial Bound

Theorem
The expected number of 2-Opt steps is bounded by O(n^7 log^2 n).

Proof.
Consider a 2-Opt step (e_1, e_2) → (e_3, e_4) and its improvement
∆(e_1, e_2, e_3, e_4) = l(e_1) + l(e_2) − l(e_3) − l(e_4).
Let ∆ = min over all e_1, e_2, e_3, e_4 of |∆(e_1, e_2, e_3, e_4)|.
Since any tour in the unit square has length at most √2 · n and every step improves by at least ∆:
# 2-Opt steps ≤ √2 · n / ∆.
Bound ∆ by a union bound: there are O(n^4) different 2-Opt steps; analyze ∆(e_1, e_2, e_3, e_4) for one of them. ⇒ ∆ ≈ 1/(n^4 log n).
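The union bound ranges over the smallest improvement any of the O(n^4) candidate steps can make. For small n this quantity can be enumerated directly; a sketch (the function name and the brute-force enumeration are mine):

```python
import itertools
import math
import random

def min_improvement_delta(pts):
    """Smallest positive |Delta| = |d(a,b) + d(c,e) - d(a,c) - d(b,e)|
    over ordered quadruples of distinct points, i.e. over all candidate
    2-Opt exchanges (e1, e2) -> (e3, e4)."""
    best = float("inf")
    for a, b, c, e in itertools.permutations(pts, 4):
        delta = (math.dist(a, b) + math.dist(c, e)
                 - math.dist(a, c) - math.dist(b, e))
        if 0 < abs(delta) < best:
            best = abs(delta)
    return best
```

Plugged into # steps ≤ √2 · n / ∆, this yields a concrete (if weak) step bound for the sampled instance.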


(38)

Idea for Improvement

The bound is too pessimistic: not every step yields the smallest possible improvement ∆ ≈ 1/(n^4 log n).

Consider two consecutive steps: they yield ∆ + ∆_2 > 2∆.

Consider a linked pair: (e_1, e_2) → (e_3, e_4) and (e_3, e_5) → (e_6, e_7).

A sequence of t consecutive steps contains Ω(t) linked pairs:

[Figure: steps S_1, ..., S_9 with linked pairs (S_1, S_4), (S_2, S_5), (S_6, S_9).]

∆_Linked ≈ 1/(n^{3+1/3} log^{2/3} n).


(43)

Extensions and Open Problems

1 Introduction

2 Lower Bound

3 Upper Bound

4 Extensions and Open Problems

(44)

Smoothed Analysis

Each point i ∈ {1, . . . , n} is chosen independently according to a probability density f_i : [0, 1]^2 → [0, φ].
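One density that attains the maximum value φ is the uniform distribution on a square of side 1/√φ around an adversarial center (the talk's figure suggests exactly such a square). A sketch, assuming φ ≥ 1; the clipping into [0, 1]^2 and the function name are choices of this sketch:

```python
import random

def smoothed_points(centers, phi):
    """Perturb adversarial centers: point i is drawn uniformly from an
    axis-aligned square of side 1/sqrt(phi) (so its density is exactly
    phi on that square), shifted to stay inside [0, 1]^2."""
    s = phi ** -0.5  # side length 1/sqrt(phi); requires phi >= 1
    pts = []
    for cx, cy in centers:
        # shift the square so it fits inside the unit square
        x0 = min(max(cx - s / 2, 0.0), 1.0 - s)
        y0 = min(max(cy - s / 2, 0.0), 1.0 - s)
        pts.append((x0 + random.random() * s, y0 + random.random() * s))
    return pts
```

Larger φ concentrates each point near its center, interpolating between the average case (φ = 1, uniform on [0, 1]^2) and the worst case (φ → ∞).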


(46)

Smoothed Analysis

             | General TSP        | Euclidean metric       | Manhattan metric
  average    | n^{3+o(1)}         | Õ(n^{4.33})            | Õ(n^4)
  smoothed   | m · n^{1+o(1)} · φ | Õ(n^{4.33} · φ^{2.67}) | Õ(n^4 · φ)
  worst-case | 2^{Ω(n)}           | 2^{Ω(n)}               | 2^{Ω(n)}

(47)

Approximation Ratio

Worst Case: O(log n)
Worst Case: Ω(log n / log log n)
Average Case: O(1)
Smoothed: O(√φ)


(50)

Open Problems

Worst-Case Analysis
- Analyze the diameter of the 2-Opt state graph.
- Analyze particular pivot rules like "largest improvement".

Probabilistic Analysis
- Show exact bounds on the running time of 2-Opt and k-Opt.
- Show a small constant approximation ratio for 2-Opt on random Euclidean instances.

(51)

The End

Thanks!

Questions?
