
Lifting to paraboloids

Clustering — k-center, k-median

Sándor Kisfaludi-Bak

Computational Geometry, Summer semester 2020

Overview

• Lifting to paraboloids: Delaunay, Voronoi (Edelsbrunner–Seidel 1986)

• Metric spaces, clustering

• k-center, greedy clustering

• k-median, local search

Lifting to a paraboloid

L(x, y) = (x, y, x² + y²)

L projects (x, y) vertically up to the paraboloid A : z = x² + y²

γ : (x − x₀)² + (y − y₀)² = r²

(x, y) ∈ γ ⇒ x² + y² = r² + 2x₀x + 2y₀y − x₀² − y₀² = α₁x + α₂y + c

So L(x, y) = (x, y, α₁x + α₂y + c) for (x, y) ∈ γ, and

L(γ) ⊂ Hγ := {(x, y, z) | −α₁x − α₂y + z = c}
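A quick numeric check of this identity (a sketch of mine, not from the slides): sample points of a circle γ, lift them with L, and verify they satisfy the plane equation of Hγ with α₁ = 2x₀, α₂ = 2y₀, c = r² − x₀² − y₀².

```python
import numpy as np

# Circle gamma with center (x0, y0) and radius r.
x0, y0, r = 1.5, -0.5, 2.0
t = np.linspace(0.0, 2.0 * np.pi, 100)
x, y = x0 + r * np.cos(t), y0 + r * np.sin(t)

# The lift L(x, y) = (x, y, x^2 + y^2).
z = x**2 + y**2

# Plane H_gamma: -a1*x - a2*y + z = c.
a1, a2, c = 2 * x0, 2 * y0, r**2 - x0**2 - y0**2
assert np.allclose(-a1 * x - a2 * y + z, c)  # L(gamma) lies in H_gamma
```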

Lifting an empty circumcircle

[Figure: a Delaunay triangle pp′p″ of P and its lifted image on A]

pp′p″ is a Delaunay triangle of P
⇔ γ = the circumcircle of pp′p″ is empty
⇔ no point of L(P) lies below Hγ
⇔ Hγ supports a face of conv(L(P))

DT(P) = proj_{z=0}(lower hull of conv(L(P)))
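This projection is directly implementable (a minimal sketch assuming SciPy; delaunay_by_lifting is my name, not the course's): lift P, take the convex hull one dimension up, and keep the downward-facing facets.

```python
import numpy as np
from scipy.spatial import ConvexHull

def delaunay_by_lifting(P):
    """P: (n, d) array of points. Returns the simplices of DT(P)."""
    z = (P**2).sum(axis=1)                 # lift: L(p) = (p, |p|^2)
    hull = ConvexHull(np.c_[P, z])
    # Lower-hull facets are exactly those whose outward normal points down.
    lower = hull.equations[:, P.shape[1]] < 0
    return hull.simplices[lower]

rng = np.random.default_rng(0)
P = rng.random((20, 2))
print(delaunay_by_lifting(P))              # the Delaunay triangles of P
```

The same code works verbatim for any dimension d, matching the later slide that DT(P) in R^d comes from a convex hull computation in R^{d+1}.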

Lifting a paraboloid

Lifting all of R³: L(x, y, z) = (x, y, z + x² + y²)

Hanging (downward) paraboloid with apex at (x₀, y₀, 0):

B_{x₀,y₀} = {(x, y, z) | z = −(x − x₀)² − (y − y₀)²}

L(x, y, −(x − x₀)² − (y − y₀)²) = (x, y, 2x₀x + 2y₀y − x₀² − y₀²)

a plane! It touches A at L(x₀, y₀).

Lifting many paraboloids: Voronoi

[Figure: opaque paraboloids B_p and B_{p′} hanging from sites p and p′, meeting above a bisector point q]

Hang an opaque paraboloid B_p for each p ∈ P.

dist(q, p) = dist(q, p′) ⇔ q ∈ B_p ∩ B_{p′}

The upper envelope of ⋃_{p∈P} B_p looks like Vor(P) from (0, 0, ∞).

Apply L(·): we get a polyhedron B̂ with face L(B_p) touching A at L(p).

L does not change the view from (0, 0, ∞), so

Vor(P) = proj_{z=0}(B̂), where B̂ = ⋂_{p∈P} (upper halfspace of touchplane_A(L(p)))
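A small numeric illustration of the "view from above" (my sketch, not course code): at any query point q, the highest hanging paraboloid, and equally the highest tangent plane after applying L, belongs to the site nearest to q.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((10, 2))                      # sites p
q = rng.random(2)                            # query point

B = -((q - P)**2).sum(axis=1)                # B_p at q: -(dist(q, p))^2
T = 2 * P @ q - (P**2).sum(axis=1)           # touchplane_A(L(p)) evaluated at q
nearest = ((q - P)**2).sum(axis=1).argmin()  # nearest site, by definition

# The upper envelope (argmax) identifies the Voronoi cell containing q,
# both before and after the lift L.
assert B.argmax() == nearest == T.argmax()
```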

Voronoi and Delaunay in higher dimensions?

Paraboloid lifting works in R^d:

Vor(P) and DT(P) are projections of convex hulls in R^{d+1}.

• Vor(P) and DT(P) in R^d have complexity O(n^⌈d/2⌉)

• Vor(P) and DT(P) in R^d can be computed by a convex hull algorithm in R^{d+1}

R³: e.g. points on two skew lines give Vor(P) complexity Θ(n²)

[Figure: n/2 points on each of two skew lines; the bisector is a saddle surface carrying an (n/2) × (n/2) grid of Voronoi vertices]

Clustering variants in metric spaces

Metric spaces and clustering

Definition. (X, dist) is a metric space with distance function dist : X × X → R≥0 iff ∀a, b, c ∈ X:

• dist(a, b) = dist(b, a) (symmetry)

• dist(a, b) = 0 ⇔ a = b

• dist(a, b) + dist(b, c) ≥ dist(a, c) (triangle inequality)

Clustering: given data, find similar entries and group them together.

Given P ⊆ X, find a set of k centers C ⊆ X s.t.

vec_C := (dist(p₁, C), dist(p₂, C), . . . , dist(pₙ, C)) is "small"
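For a finite X the axioms are easy to test mechanically (a minimal sketch; is_metric is my name): given a pairwise distance matrix D, check symmetry, the identity axiom, and the triangle inequality.

```python
import numpy as np

def is_metric(D, tol=1e-9):
    """Check the metric axioms for an (n, n) distance matrix D."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    symmetric = np.allclose(D, D.T, atol=tol)        # dist(a, b) = dist(b, a)
    # Zero diagonal, strictly positive off-diagonal.
    identity = np.allclose(np.diag(D), 0.0) and bool((D + np.eye(n) > 0).all())
    # dist(a, c) <= dist(a, b) + dist(b, c) for all a, b, c (broadcast over b).
    triangle = bool((D[:, None, :] <= D[:, :, None] + D[None, :, :] + tol).all())
    return symmetric and identity and triangle

# Shortest-path distances in a connected graph always pass this test.
D = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]])
assert is_metric(D)
```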

Clustering variants

• k-center:

min_{C⊂X,|C|=k} ‖vec_C‖_∞ = min_{C⊂X,|C|=k} max_{p∈P} dist(p, C)

"minimize the max distance to the nearest center"

a.k.a. cover P with k disks of radius r, minimizing r

• k-median:

min_{C⊂X,|C|=k} ‖vec_C‖₁ = min_{C⊂X,|C|=k} Σ_{p∈P} dist(p, C)

"minimize the sum of distances to the nearest center"

• k-means:

min_{C⊂X,|C|=k} ‖vec_C‖₂ = min_{C⊂X,|C|=k} √( Σ_{p∈P} dist(p, C)² )

"minimize the sum of squared distances to the nearest center"

C ⊆ P: discrete clustering
C ⊆ X: continuous clustering
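In a concrete (Euclidean) setting the three objectives are just different norms of the same vector (a small sketch; vec_C is the slide's notation, the code names are mine):

```python
import numpy as np

def vec_C(P, C):
    """dist(p, C) for every p in P: distance to the nearest center."""
    return np.sqrt(((P[:, None, :] - C[None, :, :])**2).sum(-1)).min(axis=1)

P = np.random.default_rng(2).random((50, 2))
C = P[:3]                                        # a discrete center set, C ⊆ P

k_center_cost = vec_C(P, C).max()                # ||vec_C||_inf
k_median_cost = vec_C(P, C).sum()                # ||vec_C||_1
k_means_cost = np.sqrt((vec_C(P, C)**2).sum())   # ||vec_C||_2
```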

Facility location

Opening a center at x ∈ X has cost γ(x). The total cost is

Σ_{x∈C} γ(x) + ‖vec_C‖₁

A "hip" topic.
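Continuing the helpers above (gamma is a hypothetical opening-cost function of my choosing, not from the slides), the facility location objective reads:

```python
# Facility location cost: opening costs plus the k-median-style term.
def facility_cost(P, C, gamma):
    return sum(gamma(x) for x in C) + vec_C(P, np.asarray(C)).sum()

cost = facility_cost(P, C, gamma=lambda x: 1.0)   # e.g. unit opening costs
```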

k-center via greedy

Hardness of k-center

Theorem (Feder–Greene 1988). There is no polynomial-time 1.8-approximation for k-center in R², unless P = NP.

Reduction from planar vertex cover with max degree 3.

Double subdivision makes an equivalent instance of VC with k → k + 1.

Subdivide until the drawing G′ has length-2 edges and "smooth" turns only.

[Figure: G and its subdivision G′; marked angles lie in [π − ε, π + ε], marked lengths in [3 − ε, 3 + ε]]

Hardness of k-center: disk radii

P := edge midpoints of the smooth drawing of G′

[Figure: an edge uv of G′ whose midpoint, a point of P, lies at distance 1 from u and from v]

∃ VC of size k in G′ ⇔ ∃ k-center with radius 1

Otherwise some disk must cover two non-neighboring midpoints u, v with dist(u, v) ≥ 2 · 1.8 ⇒ r ≥ 1.8

Greedy centers

Given C ⊆ P, the greedy next center is a point q ∈ P where dist(q, C) is maximized.

Greedy clustering: start with an arbitrary c₁ ∈ P. For i = 2, . . . , k:

let cᵢ = GreedyNext(c₁, . . . , cᵢ₋₁).

Return {c₁, . . . , cₖ}.

Store each point's distance to the nearest chosen center and update it in each step ⇒ O(nk) time.

Let rᵢ = max_{p∈P} dist(p, {c₁, . . . , cᵢ}).

Balls of radius rᵢ with centers {c₁, . . . , cᵢ} cover P for any i

⇒ {c₁, . . . , cₖ} with radius rₖ is a valid k-center solution.
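A sketch of this greedy (Gonzalez's farthest-point algorithm; the Euclidean setting and function names are my choices), keeping the per-point distances that give the O(nk) bound:

```python
import numpy as np

def greedy_k_center(P, k):
    """Greedy clustering: returns center indices c_1..c_k and radius r_k."""
    centers = [0]                              # arbitrary first center c_1
    d = np.sqrt(((P - P[0])**2).sum(axis=1))   # dist(p, {c_1}) for every p
    for _ in range(k - 1):
        ci = int(d.argmax())                   # GreedyNext: the farthest point
        centers.append(ci)
        # One O(n) pass keeps d = dist(p, {c_1, ..., c_i}).
        d = np.minimum(d, np.sqrt(((P - P[ci])**2).sum(axis=1)))
    return centers, float(d.max())             # r_k <= 2 * OPT (theorem below)

P = np.random.default_rng(3).random((200, 2))
C, r_k = greedy_k_center(P, 5)
```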

Greedy k-center approximation quality

Theorem. Greedy k-center gives a 2-approximation.

Proof. r₁ ≥ r₂ ≥ · · · ≥ rₖ.

Let cₖ₊₁ := a point realizing rₖ. If i < j ≤ k + 1, then

dist(cᵢ, cⱼ) ≥ dist(cⱼ, {c₁, . . . , cⱼ₋₁}) = rⱼ₋₁ ≥ rₖ

Let r_opt := the optimal k-center radius, and suppose 2·r_opt < rₖ.

⇒ each ball of the optimum contains ≤ 1 point from c₁, . . . , cₖ₊₁ (two such points would be at distance ≤ 2·r_opt < rₖ).

But then k balls cannot cover the k + 1 points c₁, . . . , cₖ₊₁, a contradiction. So rₖ ≤ 2·r_opt. □

r-packing from greedy

Definition. S ⊂ X is an r-packing if

• r-balls cover X: dist(x, S) ≤ r for each x ∈ X

• S is sparse: dist(s, s′) ≥ r for all distinct s, s′ ∈ S

Theorem. For any i, {c₁, . . . , cᵢ} is an rᵢ-packing. (Coverage is the definition of rᵢ; sparsity holds since dist(cⱼ′, cⱼ) ≥ rⱼ₋₁ ≥ rᵢ for j′ < j ≤ i.)

Exact k-center in R^d, approximating k

Trivial: O(n^{k+1})

R²: n^{O(√k)}, or 2^{O(√n)}; no n^{o(√k)} algorithm is known

R^d, d = const.: 2^{O(n^{1−1/d})} ("optimal")

Fix r, approximate k instead: a polynomial-time (1 + ε)-approximation for any fixed d, ε (PTAS)

Later lectures!

k-median

k-median via local search

• Compute C = {c₁, . . . , cₖ} and rₖ: a k-center 2-approximation.

This gives a 2n-approximation for k-median, as ‖vec_C‖₁ ≤ n‖vec_C‖_∞ = n·rₖ ≤ 2n·OPT(k-center) ≤ 2n·OPT(k-median).

• Iteratively replace c ∈ C with c′ if it improves ‖vec_C‖₁ by at least a factor 1 − τ, where τ = 1/(10k).

⇒ This results in a locally optimal center set L.

Running time: O(nk) possible swaps, each taking O(nk) to recompute distances; at most log_{1/(1−τ)}(2n) accepted swaps:

O((nk)² · log_{1/(1−τ)}(2n)) = O((nk)² · log_{1+τ} n) = O((nk)² · 10k · log n) = O(k³n² log n)
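A minimal sketch of this loop (Euclidean, discrete centers; all names are mine, and the scan order over swaps is arbitrary):

```python
import numpy as np

def kmedian_cost(P, C_idx):
    """||vec_C||_1 for the center set P[C_idx]."""
    D = np.sqrt(((P[:, None, :] - P[C_idx][None, :, :])**2).sum(-1))
    return D.min(axis=1).sum()

def local_search_kmedian(P, C_idx):
    k = len(C_idx)
    tau = 1.0 / (10 * k)
    improved = True
    while improved:
        improved = False
        cost = kmedian_cost(P, C_idx)
        for i in range(k):                      # try every single swap c -> c'
            for c_new in range(len(P)):
                if c_new in C_idx:
                    continue
                trial = C_idx[:i] + [c_new] + C_idx[i + 1:]
                if kmedian_cost(P, trial) < (1 - tau) * cost:
                    C_idx, improved = trial, True   # accept the improving swap
                    break
            if improved:
                break
    return C_idx                                # the local optimum L

P = np.random.default_rng(4).random((60, 2))
L = local_search_kmedian(P, list(range(5)))     # seed e.g. with greedy centers
```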

k-median: quality of approximation

Theorem. The local optimum L gives a 5-approximation for k-median.

Challenge: L and OPT may be very different.

Idea: use an "intermediate" clustering Π to relate them. Π assigns the cluster of each center o ∈ OPT to nn(o, L): like L, but it respects the clusters of OPT.

Cost of moving from L to Π

Let Π(p), L(p), OPT(p) be the center (= nearest neighbor) of p in each clustering.

Claim. ‖vec_Π‖₁ − ‖vec_L‖₁ ≤ 2‖vec_OPT‖₁.

dist(p, Π(p)) ≤ dist(p, OPT(p)) + dist(OPT(p), Π(p))
≤ dist(p, OPT(p)) + dist(OPT(p), L(p))
≤ dist(p, OPT(p)) + dist(OPT(p), p) + dist(p, L(p))
= 2·dist(p, OPT(p)) + dist(p, L(p))

Summing over all p ∈ P proves the claim.

For c ∈ L, the cost of reassigning its cluster to Π is

ran(c) := Σ_{p ∈ Cl(L,c) \ Cl(Π,c)} ( dist(p, Π(p)) − dist(p, L(p)) )

Claim ⇒ Σ_{c∈L} ran(c) ≤ 2‖vec_OPT‖₁

L₀, L₁, L≥2, OPT₁, OPT≥2

Each c ∈ L may be assigned 0, 1, or ≥ 2 centers of OPT (via o ↦ nn(o, L)): L = L₀ ∪ L₁ ∪ L≥2.

OPT₁: the subset of OPT assigned to L₁.

OPT≥2: the subset of OPT assigned to L≥2.

OPT = OPT₁ ∪ OPT≥2.

For o ∈ OPT, cost(o) and localcost(o) denote the cost of Cluster(o, OPT) in OPT and in L, respectively.

Lemma. For c ∈ L₀ and o ∈ OPT we have

localcost(o) ≤ ran(c) + cost(o).

Proof. Removing c and adding o to L does not improve:

0 ≤ ran(c) − localcost(o) + cost(o).

Bounding the contribution of OPT≥2

Since |L₁| = |OPT₁| (a matching) and |L₀| + |L₁| + |L≥2| = |OPT₁| + |OPT≥2| = k:

|L₀| = |OPT≥2| − |L≥2| ≥ |OPT≥2|/2,

because each c ∈ L≥2 receives ≥ 2 centers of OPT, so |L≥2| ≤ |OPT≥2|/2.

Lemma. Σ_{o∈OPT≥2} localcost(o) ≤ 2 Σ_{c∈L₀} ran(c) + Σ_{o∈OPT≥2} cost(o)

Proof. Let c ∈ L₀ minimize ran(c). By the earlier lemma, localcost(o) ≤ ran(c) + cost(o). Summing over o ∈ OPT≥2:

Σ_{o∈OPT≥2} localcost(o) ≤ |OPT≥2|·ran(c) + Σ_{o∈OPT≥2} cost(o) ≤ 2|L₀|·ran(c) + Σ_{o∈OPT≥2} cost(o) ≤ 2 Σ_{c′∈L₀} ran(c′) + Σ_{o∈OPT≥2} cost(o)

Bounding the contribution of OPT₁

Lemma. Σ_{o∈OPT₁} localcost(o) ≤ Σ_{o∈OPT₁} ran(L(o)) + Σ_{o∈OPT₁} cost(o)

Proof. o ∈ OPT₁ is assigned to L(o) = Π(o).

Claim: localcost(o) ≤ ran(L(o)) + cost(o).

Replacing L(o) with o in L doesn't improve. Potentially increased costs occur only in Cl(L, L(o)) ∪ Cl(OPT, o):

the reassignment cost in Cl(L, L(o)) \ Cl(OPT, o) is ran(L(o));

the reassignment cost in Cl(OPT, o) is ≤ −localcost(o) + cost(o).

⇒ 0 ≤ ran(L(o)) − localcost(o) + cost(o).

k-median approximation quality wrap-up

Theorem. The local optimum L gives a 5-approximation for k-median.

‖vec_L‖₁ = Σ_{o∈OPT₁} localcost(o) + Σ_{o∈OPT≥2} localcost(o)

≤ ( 2 Σ_{c∈L₀} ran(c) + Σ_{o∈OPT≥2} cost(o) ) + ( Σ_{o∈OPT₁} ran(L(o)) + Σ_{o∈OPT₁} cost(o) )

≤ 2 Σ_{c∈L} ran(c) + Σ_{o∈OPT} cost(o)

≤ 4‖vec_OPT‖₁ + ‖vec_OPT‖₁ = 5‖vec_OPT‖₁ □

k-median, k-means with local search

Theorem. For any ε > 0, the local optimum L w.r.t. (1 − τ)-improvements (τ := ε/(10k)) gives a (5 + ε)-approximation for k-median in O(n²k³ · ε⁻¹ log n) time.

→ Can get a (3 + 2/p)-approximation with p-swaps (tight)

Theorem. For any ε > 0, local search gives a (25 + ε)-approximation for k-means in O(n²k³ · ε⁻¹ log n) time.

→ Can get a (3 + 2/p)²-approximation with p-swaps (tight)

k-median, k-means in R^d

For k-means with constant d, local search with (1/ε)^{Θ(1)}-swaps gives a PTAS (e.g. Cohen-Addad et al. 2019).

k-median is NP-hard if k and d are both part of the input (Guruswami–Indyk 2003), but if at least one of them is constant, there is a PTAS.

Next week: SoCG 2020! Check it out.
