1
Locality Sensitive Distributed Computing
  • David Peleg, Weizmann Institute

2
Structure of mini-course
  • Basics of distributed network algorithms
  • Locality-preserving network representations
  • Constructions and applications

3
Part 2: Representations
  • Clustered representations
  • Basic concepts: clusters, covers, partitions
  • Sparse covers and partitions
  • Decompositions and regional matchings
  • Skeletal representations
  • Spanning trees and tree covers
  • Sparse and light-weight spanners

4
Basic idea of locality-sensitive distributed computing
  • Utilize locality to both
  • simplify control structures and algorithms, and
  • reduce their costs
  • An operation performed in a large network may concern only a few processors in a small region
  • (A global operation may have local sub-operations)
  • Reduce costs by utilizing locality of reference

5
Components of locality theory
  • General framework, complexity measures and
    algorithmic methodology
  • Suitable graph-theoretic structures and efficient
    construction methods
  • Adaptation to wide variety of applications

6
Fundamental approach
  • Clustered representation
  • Impose clustered hierarchical organization on
    given network
  • Use it efficiently for bounding complexity of
    distributed algorithms.
  • Skeletal representation
  • Sparsify given network
  • Execute applications on remaining skeleton,
    reducing complexity

7
Clusters, covers and partitions
Cluster: a connected subset of vertices S ⊆ V
8
Clusters, covers and partitions
Cover of G(V,E,w): a collection of clusters S = {S1,...,Sm} containing all vertices of G (i.e., s.t. ∪S = V).
9
Partitions
Partial partition of G: a collection of disjoint clusters S = {S1,...,Sm}, i.e., s.t. Si ∩ Sj = ∅ for i ≠ j
Partition: a cover that is also a partial partition
10
Evaluation criteria
Locality and sparsity:
Locality level: cluster radius
Sparsity level: vertex / cluster degrees
11
Evaluation criteria
  • Locality - sparsity tradeoff:
  • the locality and sparsity parameters
  • go opposite ways
  • better sparsity ⇒ worse locality
  • (and vice versa)

12
Evaluation criteria
Locality measures: weighted distances.
Length of path (e1,...,es) = Σ_{1≤i≤s} w(ei)
dist(u,w,G) = (weighted) length of a shortest u-w path
dist(U,W) = min { dist(u,w) | u∈U, w∈W }
13
Evaluation criteria
  • Diameter, radius: as before, except weighted
  • Denote log D = ⌈log Diam(G)⌉
  • For a collection of clusters S:
  • Diam(S) = max_i Diam(Si)
  • Rad(S) = max_i Rad(Si)

14
Neighborhoods
Γ(v) = neighborhood of v = the set of v's neighbors in G (including v itself)
(Figure: Γ(v))
15
Neighborhoods
Γ_l(v) = l-neighborhood of v = vertices at distance ≤ l from v
(Figure: nested neighborhoods Γ_0(v), Γ_1(v), Γ_2(v))
16
Neighborhood covers
For W ⊆ V: Γ̂_l(W) = l-neighborhood cover of W = { Γ_l(v) | v∈W } (the collection of l-neighborhoods of the W vertices)
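
To make the notation concrete, here is a minimal Python sketch (our own illustration, not part of the slides) of Γ_l(v) and Γ̂_l(W) for an unweighted graph given as an adjacency dict; the names `ball` and `neighborhood_cover` are ours:

```python
from collections import deque

def ball(adj, v, l):
    """Gamma_l(v): all vertices within l hops of v (BFS with a depth cap)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == l:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def neighborhood_cover(adj, W, l):
    """Gamma-hat_l(W): the collection { Gamma_l(v) | v in W }."""
    return [ball(adj, v, l) for v in W]
```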
17
Neighborhood covers
E.g., Γ̂_0(V) = the partition into singleton clusters
18
Neighborhood covers
E.g., Γ̂_1(W) = cover of the W nodes by their neighborhoods
(Figure: W = colored nodes; Γ̂_1(W))
19
Sparsity measures
Different representations ⇒ different ways to measure sparsity
20
Cover sparsity measure - overlap
deg(v,S) = # occurrences of v in clusters S∈S, i.e., the degree of v in the hypergraph (V,S)
ΔC(S) = maximum degree of cover S
AvD(S) = average degree of S = Σ_{v∈V} deg(v,S) / n = Σ_{S∈S} |S| / n
(Figure: deg(v) = 3)
21
Partition sparsity measure - adjacency
Intuition: contract the clusters into super-nodes and look at the resulting cluster graph of S, G(S) = (S, Ẽ)
22
Partition sparsity measure - adjacency
G(S) = (S, Ẽ), where Ẽ = { (S,S') | S,S'∈S, G contains an edge (u,v) with u∈S and v∈S' }
|Ẽ| = # inter-cluster edges
23
Cluster-neighborhood
Def: Given a partition S, a cluster S∈S, and an integer l ≥ 0:
Cluster-neighborhood of S = neighborhood of S in the cluster graph G(S): Γc_l(S,G) = Γ_l(S, G(S))
(Figure: Γc(S,G) around S)
24
Sparsity measure
Average cluster-degree of partition S: AvDc(S) = Σ_{S∈S} |Γc(S)| / n
Note: n·AvDc(S) ≈ # inter-cluster edges
25
Example A basic construction
Goal: produce a partition S with
1. clusters of radius ≤ k
2. few inter-cluster edges (or, low AvDc(S))
Algorithm BasicPart operates in iterations, each constructing one cluster
26
Example A basic construction
At the end of each iteration:
- Add the resulting cluster S to the output collection S
- Discard it from V
- If V is not empty, start a new iteration
27
Iteration structure
  • Arbitrarily pick a vertex v from V
  • Grow cluster S around v, adding layer by layer
  • Vertices added to S are discarded from V

28
Iteration structure
  • The layer-merging process is repeated until the required sparsity condition is reached:
  • the next layer would increase the # vertices
  • by a factor of < n^{1/k}
  • (i.e., |Γ(S)| < |S|·n^{1/k}; see the sketch below)
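
The following is a minimal centralized Python sketch of Algorithm BasicPart as just described, assuming an unweighted graph stored as an adjacency dict; the stopping test |Γ(S)| < n^{1/k}·|S| follows the slides, and the function name is ours:

```python
def basic_part(adj, k):
    """Partition V into clusters of radius <= k-1 with few inter-cluster edges."""
    n = len(adj)
    remaining = set(adj)
    clusters = []
    while remaining:
        v = next(iter(remaining))              # arbitrarily chosen next center
        cluster = {v}
        while True:
            grown = cluster | {w for u in cluster for w in adj[u]
                               if w in remaining}
            if len(grown) < n ** (1.0 / k) * len(cluster):
                break                          # sparsity condition reached
            cluster = grown                    # merge the next layer
        clusters.append(cluster)
        remaining -= cluster                   # discard clustered vertices
    return clusters
```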

29
Analysis
  • Av-Deg-Partition Thm:
  • Given an n-vertex graph G(V,E) and an integer k ≥ 1,
  • Alg. BasicPart creates a partition S satisfying:
  • Rad(S) ≤ k-1,
  • # inter-cluster edges in G(S) ≤ n^{1+1/k}
  • (or, AvDc(S) ≤ n^{1/k})

30
Analysis (cont)
  • Proof:
  • Correctness:
  • Every S added to S is a (connected) cluster
  • The generated clusters are disjoint
  • (the algorithm erases from V every v added to a cluster)
  • S is a partition (covers all vertices)

31
Analysis (cont)
Property (2): |Ẽ(G(S))| ≤ n^{1+1/k}
By the termination condition of the internal loop, the resulting S satisfies |Γ(S)| < n^{1/k}·|S|
⇒ # inter-cluster edges touching S ≤ n^{1/k}·|S|
This number can only decrease in later iterations, if adjacent vertices get merged into the same cluster
⇒ |Ẽ| ≤ Σ_{S∈S} n^{1/k}·|S| ≤ n^{1+1/k}
32
Analysis (cont)
Property (1): Rad(S) ≤ k-1
Consider an iteration of the main loop. Let J = # times the internal loop was executed, and let Si = the S constructed on the i-th internal iteration. Then |Si| > n^{(i-1)/k} for 2 ≤ i ≤ J (by induction on i)
33
Analysis (cont)
⇒ J ≤ k (otherwise |S| > n)
Note: Rad(Si) ≤ i-1 for every 1 ≤ i ≤ J
(S1 is composed of a single vertex; each additional layer increases Rad(Si) by 1)
⇒ Rad(SJ) ≤ k-1
34
Variant - Separated partial partitions
Sep(S) = separation of partial partition S = the minimal distance between any two S clusters
When Sep(S) ≥ s, we say S is s-separated
Example: a 2-separated partial partition (figure)
35
Coarsening
Cover T = {T1,...,Tq} coarsens S = {S1,...,Sp} if the S clusters are fully subsumed in T clusters
(Figure: S ⊆ T)
36
Coarsening (cont)
The radius ratio of the coarsening = Rad(T) / Rad(S)
(Figure: clusters S ⊆ T with radii r and R; ratio R / r)
37
Coarsening (cont)
  • Motivation
  • Given useful S with high overlaps
  • Coarsen S by merging some clusters together,
    getting a coarsening cover T with
  • larger clusters
  • better sparsity
  • increased radii

38
Sparse covers
Goal: For an initial cover S, construct a coarsening T with low overlaps, paying little in cluster radii
Inherent tradeoff:
lower overlap ⇒ higher radius ratio
(and vice versa)
Simple goal: low average degree
39
Sparse covers
  • Algorithm AvCover:
  • Operates in iterations
  • Each iteration merges together some S clusters
  • into one output cluster Z∈T
  • At the end of each iteration:
  • Add the resulting cluster Z to the output collection T
  • Discard the merged clusters from S
  • If S is not empty, start a new iteration

40
Sparse covers
Algorithm AvCover high-level flow
41
Iteration structure
  • Arbitrarily pick a cluster S0 in S (as the kernel Y of the cluster Z constructed next)
  • Repeatedly merge the cluster with intersecting clusters from S (adding one layer at a time)
  • Clusters added to Z are discarded from S

42
Iteration structure
- The layer-merging process is repeated until the required sparsity condition is reached:
adding the next layer increases the # vertices by a factor of ≤ n^{1/k} (|Z| ≤ |Y|·n^{1/k}; see the sketch below)
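
A hedged Python sketch of this merge loop (clusters as plain vertex sets; the test |Z| ≤ n^{1/k}·|Y| is the sparsity condition above; the function name is ours):

```python
def av_cover(clusters, n, k):
    """Coarsen a cover: repeatedly grow a kernel Y by merging every cluster
    that intersects it, stopping once the merged cluster Z holds fewer than
    n**(1/k) times the kernel's vertices."""
    S = [set(c) for c in clusters]
    T = []
    while S:
        Y = S.pop()                            # arbitrary kernel
        while True:
            touching = [c for c in S if c & Y]
            for c in touching:                 # merged clusters leave S
                S.remove(c)
            Z = set().union(Y, *touching)
            if len(Z) <= n ** (1.0 / k) * len(Y):
                break                          # sparsity condition met
            Y = Z                              # absorb the layer and repeat
        T.append(Z)
    return T
```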
43
Analysis
  • Thm: Given a graph G(V,E,w), a cover S, and an integer k ≥ 1,
  • Algorithm AvCover constructs a cover T s.t.:
  • T coarsens S,
  • Rad(T) ≤ (2k+1)·Rad(S) (radius ratio 2k+1)
  • AvD(T) ≤ n^{1/k} (low average sparsity)

44
Analysis (cont)
  • Corollary for l-neighborhood covers:
  • Given G(V,E,w) and integers k,l ≥ 1,
  • there exists a cover T = T_{l,k} s.t.:
  • T coarsens the neighborhood cover Γ̂_l(V)
  • Rad(T) ≤ (2k+1)·l
  • AvD(T) ≤ n^{1/k}

45
Analysis (cont)
Proof of Thm:
Property (1): T coarsens S
Holds directly from the construction
(Each Z added to T is a (connected) cluster, since at the beginning S contained clusters)
46
Analysis (cont)
Claim: The kernels Y corresponding to the clusters Z generated by the algorithm are mutually disjoint.
Proof: By contradiction. Suppose there is a vertex v s.t. v ∈ Y∩Y'. W.l.o.g. suppose Y was created before Y'. v ∈ Y' ⇒ there is a cluster S' s.t. v∈S' and S' was still in S when the algorithm started constructing Y'.
47
Analysis (cont)
But S' satisfies S'∩Y ≠ ∅ ⇒ the final merge creating Z from Y should have added S' into Z and eliminated it from S, a contradiction.
48
Output clusters and kernels
(Figure: the kernels inside the clusters of cover T)
49
Analysis (cont)
Property (2): Rad(T) ≤ (2k+1)·Rad(S)
Consider some iteration of the main loop (starting with cluster S∈S). J = # times the internal loop was executed. Z0 = the initial set Z; Zi = the Z constructed on the i-th internal iteration (1 ≤ i ≤ J), with kernel Yi respectively
50
Analysis (cont)
Note 1: |Zi| > n^{i/k} for every 1 ≤ i ≤ J-1 ⇒ J ≤ k
Note 2: Rad(Yi) ≤ (2i-1)·Rad(S) for every 1 ≤ i ≤ J
⇒ Rad(YJ) ≤ (2k-1)·Rad(S)
51
Analysis (cont)
Property (3): AvD(T) ≤ n^{1/k}

AvD(T) = Σ_{Zi∈T} |Zi| / n ≤ Σ_{Zi∈T} |Yi|·n^{1/k} / n ≤ n·n^{1/k} / n (the Yi's are disjoint) = n^{1/k}
52
Partial partitions
  • Goal:
  • Given an initial cover R and an integer k ≥ 1,
  • construct a partial partition DT
  • subsuming a large subset of clusters DR ⊆ R,
  • with low radius ratio.

53
Partial partitions (cont)
Procedure Part: general structure and iterations similar to Algorithm AvCover, except for two differences.
Small difference: the procedure also keeps the unmerged collections Ŷ, Ẑ of the original R clusters merged into Y and Z.
54
Partial partitions (cont)
Small difference (cont): The sparsity condition concerns the sizes of Ŷ, Ẑ, i.e., the # original clusters captured by the merge, and not the sizes of Y, Z, i.e., the # vertices covered
⇒ Merging ends when the next iteration increases the # clusters merged into Ẑ by a factor of ≤ |R|^{1/k}.
55
Main difference
The procedure removes all clusters in Ẑ, but takes into the output collection DT only the kernel Y, not the cluster Z
56
Main difference
Implication: Each selected cluster Y has an additional external layer of R clusters around it, acting as a protective barrier providing disjointness between the different clusters Y, Y' added to DT
57
Main difference
Note: Not all R clusters are subsumed by DT
(e.g., those merged into some external layer Z-Y will not be subsumed)
58
Analysis
Partial Partition Lemma: Given a graph G(V,E,w), a cluster collection R, and an integer k ≥ 1, the collections DT and DR constructed by Procedure Part(R) satisfy:
1. DT coarsens DR (as before)
2. DT is a partial partition (i.e., Y∩Y' = ∅ for every Y,Y' ∈ DT) (guaranteed by construction)
3. |DR| ≥ |R|^{1-1/k} (# clusters discarded ≤ |R|^{1/k} · # clusters taken)
4. Rad(DT) ≤ (2k-1)·Rad(R) (as before)
59
s-Separated partial partitions
Goal: For an initial l-neighborhood cover R and s,k ≥ 1, construct an s-separated partial partition DT subsuming a large subset of clusters DR ⊆ R, with low radius ratio.
60
s-Separated partial partitions (cont)
  • Procedure SepPart:
  • Given R, construct the modified collection R' of neighborhoods of radius l' = l + ⌈s/2⌉:
  • R = { Γ_l(v) | v∈W } for some W⊆V
  • ⇒ R' = { Γ_{l'}(v) | v∈W }

61
s-Separated partial partitions (cont)
Example: l=1, s=2 ⇒ l' = 2
R = { Γ_1(v) | v∈W }, R' = { Γ_2(v) | v∈W }
62
s-Separated partial partitions (cont)
  • Apply Procedure Part to R',
  • getting a partial partition DT'
  • and subsumed neighborhoods DR'
  • Transform DT' into the required DT as follows:
  • Shrink each cluster T'∈DT' into T by eliminating from T' the vertices closer to its border than s/2
  • DR = the input neighborhoods corresponding to DR'

63
Analysis
  • Lemma:
  • Given a graph G(V,E,w), a collection R of
  • l-neighborhoods, and integers s,k,
  • the collections DT and DR constructed by
  • Procedure SepPart satisfy:
  • DT coarsens DR
  • DT is an s-separated partial partition
  • |DR| ≥ |R|^{1-1/k}
  • Rad(DT) ≤ (2k-1)·l + k·s

64
Sparse covers with low max degree
Goal: For an initial cover S, construct a coarsening cover T with low maximum degree and low radius ratio.
Idea: Reduce to the sub-problem of partial partition
65
Low max degree covers (cont)
  • Strategy:
  • Given an initial cover R and an integer k ≥ 1:
  • Repeatedly select low-radius partial partitions, each subsuming many clusters of R.
  • Their union should subsume all of R.
  • The resulting overlap ≤ # partial partitions.

(Figure: partial partitions PP1, PP2, PP3)
66
Low max degree covers (cont)
  • Algorithm MaxCover:
  • Cover the S clusters by several partial partitions
  • (repeatedly using Procedure Part on the remaining clusters, until S is empty; see the skeleton below)
  • Merge the constructed partial partitions into the desired cover T
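
A skeleton of this phase structure, where `part` is a hypothetical stand-in for Procedure Part described earlier, assumed to return the pair (DT, DR):

```python
def max_cover(S, k, part):
    """Skeleton of Algorithm MaxCover: peel off partial partitions until
    every cluster of the input cover S has been subsumed."""
    R = list(S)
    T = []
    while R:
        DT, DR = part(R, k)                # one phase: disjoint output clusters
        T.extend(DT)
        R = [c for c in R if c not in DR]  # remaining, not-yet-subsumed clusters
        # By the Partial Partition Lemma |DR| >= |R|**(1-1/k), so the number
        # of phases (and hence the overlap of T) stays below ~2k|S|**(1/k).
    return T
```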

67
Low max degree covers (cont)
  • Max-Deg-Cover Thm:
  • Given G(V,E,w), a cover S, and an integer k ≥ 1,
  • Algorithm MaxCover constructs a cover T
  • satisfying:
  • T coarsens S,
  • Rad(T) ≤ (2k-1)·Rad(S),
  • ΔC(T) ≤ 2k·|S|^{1/k}

68
Analysis
Proof: Define Ri = the contents of R at the start of phase i; ri = |Ri|; DTi = the set DT added to T at the end of phase i; DRi = the set DR removed from R at the end of the phase.
69
Analysis (cont)
Property (1): T coarsens S
Since T = ∪_i DTi, S = ∪_i DRi, and by the Partial Partition Lemma, DTi coarsens DRi for every i.
Property (2): Rad(T) ≤ (2k-1)·Rad(S)
Directly by the Partial Partition Lemma
70
Analysis (cont)
Property (3): ΔC(T) ≤ 2k·|S|^{1/k}
By the Partial Partition Lemma, the clusters in each DTi are disjoint
⇒
# clusters v belongs to ≤ # phases of the algorithm
71
Analysis (cont)
Observation: In every phase i, the # clusters removed from Ri satisfies |DRi| ≥ ri^{1-1/k} (by the Partial Partition Lemma)
⇒ the size of the remaining Ri shrinks as r_{i+1} ≤ ri - ri^{1-1/k}
72
Analysis (cont)
Claim: Given the recurrence x_{i+1} ≤ xi - xi^a, 0 < a < 1, let f(n) = the least index i s.t. xi ≤ 1, given x0 = n. Then f(n) < ((1-a)·ln 2)^{-1}·n^{1-a}.
Consequently, as r0 = |S|, S is exhausted after ≤ 2k·|S|^{1/k} phases of Algorithm MaxCover
⇒ ΔC(T) ≤ 2k·|S|^{1/k}
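
The claimed bound on this recurrence is easy to sanity-check numerically; a small sketch (our own, with illustrative values of n and k):

```python
import math

def phases(n, a):
    """Iterate x_{i+1} = x_i - x_i**a from x_0 = n until x <= 1; count steps."""
    x, i = float(n), 0
    while x > 1:
        x -= x ** a
        i += 1
    return i

n, k = 10**6, 3
a = 1 - 1.0 / k
bound = n ** (1 - a) / ((1 - a) * math.log(2))   # claimed f(n) bound
print(phases(n, a), "<", bound)                  # roughly 300 < 433
```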
73
Analysis (cont)
  • Corollary for l-neighborhood covers:
  • Given G(V,E,w) and integers k,l ≥ 1,
  • there exists a cover T = T_{l,k} satisfying:
  • T coarsens Γ̂_l(V)
  • Rad(T) ≤ (2k-1)·l
  • ΔC(T) ≤ 2k·n^{1/k}

74
Covers based on s-separated partial partitions
Goal: A cover coarsening the neighborhood cover Γ̂_l(V), in which the partial partitions are well separated.
Method: Substitute Procedure SepPart for Procedure Part in Algorithm MaxCover.
75
Covers based on s-separated partial partitions
  • Thm:
  • Given G(V,E,w) and integers k,l ≥ 1,
  • there exists a cover T = T_{l,k} s.t.:
  • T coarsens Γ̂_l(V),
  • Rad(T) ≤ (2k-1)·l + k·s,
  • ΔC(T) ≤ 2k·n^{1/k},
  • each of the ΔC(T) layers of partial partitions composing T is s-separated.

76
Related graph representations
  • Network decomposition:
  • A partition S is a (d,c)-decomposition of G(V,E) if:
  • the radius of the clusters in G is Rad(S) ≤ d
  • the chromatic number of the cluster graph G(S) is χ(G(S)) ≤ c

77
Example: A (2,3)-decomposition
(Figure: Rad(S) = 2, χ(G(S)) = 3)
78
Decomposition algorithm
The algorithm operates in iterations. In each iteration i:
- Invoke Procedure SepPart to construct a 2-separated partial partition for V
At the end of the iteration:
- Assign color i to all output clusters
- Delete the covered vertices from V
- If V is not empty, start a new iteration
79
Decomposition algorithm (cont)
Main properties:
1. Uses Procedure SepPart instead of Part (i.e., guaranteed separation ≥ 2, not 1)
⇒ ensures all output clusters of a single iteration can be colored by a single color
2. Each iteration applies only to the remaining nodes
⇒ clusters generated in different iterations are disjoint.
80
Analysis
Thm: Given G(V,E,w) and k ≥ 1, there is a (k, k·n^{1/k})-decomposition.
Proof: Note that the final collection T is a partition
(- each DT generated by SepPart is a partial partition
 - vertices added to the DT of iteration i are removed from V)
81
Analysis (cont)
An iteration starting with R results in a DR of size |DR| = Ω(|R|^{1-1/k})
⇒ the process continues for O(k·n^{1/k}) iterations
⇒ we end with O(k·n^{1/k}) colors, and each cluster has O(k) diameter.
Picking k = log n:
Corollary: Every n-vertex graph G has a (log n, log n)-decomposition.
82
Skeletal representations
Spanner: a connected subgraph spanning all nodes (special case: spanning tree)
Tree cover: a collection of trees covering G
83
Skeletal representations
Evaluation criteria:
Locality level: stretch factor
Sparsity level: # edges
As for clustered representations, the locality and sparsity parameters go opposite ways: better sparsity ⇒ worse locality
84
Stretch
Given a graph G(V,E,w) and a spanning subgraph G'(V,E'), the stretch factor of G' is
Stretch(G') = max_{u,v∈V} { dist(u,v,G') / dist(u,v,G) }
(Figure: G and a spanner G' with Stretch(G') = 2)
85
Depth
Def: Depth of v in tree T = distance from the root: Depth_T(v) = dist(v, r0, T)
Depth(T) = max_v Depth(v,T) = radius w.r.t. the root: Depth(T) = Rad(r0, T)
86
Sparsity measures
Def: Given a subgraph G'(V',E') of G(V,E,w):
w(G') = weight of G' = Σ_{e∈E'} w(e)
Size of G' = # edges = |E'|
87
Spanning trees - basic types
MST: minimum-weight spanning tree of G = a spanning tree TM minimizing w(TM)
SPT: shortest paths tree of G w.r.t. a given root r0 = a spanning tree TS s.t. for every v ≠ r0, the path from r0 to v in the tree is the shortest possible; or, Stretch(TS, r0) = 1
88
Spanning trees - basic types
BFS: breadth-first tree of G w.r.t. a given root r0 = a spanning tree TB s.t. for every v ≠ r0, the path from r0 to v in the tree is the shortest possible, measuring path length in # edges
89
Controlling tree degrees
  • deg(v,G) = degree of v in G
  • Δ(G) = max degree in G
  • Tree Embedding Thm:
  • For every rooted tree T and integer m ≥ 1,
  • ∃ an embedded virtual tree S with the same node set,
  • same root (but a different edge set), s.t.:
  • 1. Δ(S) ≤ 2m
  • 2. Each edge of S has a corresponding path of length ≤ 2 in T
  • 3. Depth_S(v) ≤ (2·log_m Δ(T) - 1)·Depth_T(v), for every v

90
Proximity-preserving spanners
Motivation: How good is a shortest paths tree as a spanner?
TS preserves distances in the graph w.r.t. the root r0, i.e., achieves Stretch(TS, r0) = 1. However, it fails to preserve distances w.r.t. vertex pairs not involving r0 (or, to bound Stretch(TS)).
Q: Construct an example where two neighboring vertices in G are at distance 2·Depth(T) in the SPT
91
Proximity-preserving spanners
k-Spanner: Given a graph G(V,E,w), the subgraph G'(V,E') is a k-spanner of G if Stretch(G') ≤ k
Typical goal: Find sparse (small size, small weight) spanners with a small stretch factor
92
Example - 2-spanner
93
Example - 2-spanner
94
Tree covers
Basic notion: a tree T covering the l-neighborhood of v
(Figure: Γ_2(v) and the covering tree T)
95
Tree covers (cont)
l-tree cover for graph G = tree cover for Γ̂_l(V) = a collection TC of trees in G s.t. for every v∈V there is a tree T∈TC (denoted home(v)) spanning the l-neighborhood of v
Depth(TC) = max_{T∈TC} Depth(T)
Overlap(TC) = max_v # trees containing v
96
Tree covers
  • Algorithm TreeCover(G,k,l):
  • Construct the l-neighborhood cover of G, S = Γ̂_l(V)
  • Compute a coarsening cover R for S as in the Max-Deg-Cover Thm, with parameter k
  • Select in each cluster R∈R an SPT T(R) rooted at some center of R and spanning R
  • Set TC(k,l) = { T(R) | R∈R }

97
Tree covers (cont)
  • Thm:
  • For every graph G(V,E,w) and integers k,l ≥ 1,
  • there is an l-tree cover TC = TC(k,l) with:
  • Depth(TC) ≤ (2k-1)·l
  • Overlap(TC) ≤ ⌈2k·n^{1/k}⌉

98
Tree covers (cont)
Proof: 1. The TC built by Alg. TreeCover is an l-tree cover:
Consider v∈V. R coarsens S ⇒ there is a cluster R∈R s.t. Γ_l(v) ⊆ R ⇒ the tree T(R)∈TC covers the l-neighborhood Γ_l(v)
99
Tree covers (cont)
2. The bound on Depth(TC) follows from the radius bound on the clusters of cover R, guaranteed by the Max-Deg-Cover Thm, as these trees are SPTs.
3. The bound on Overlap(TC) follows from the degree bound on R (Max-Deg-Cover Thm), as |S| = n
100
Tree covers (cont)
  • Relying on the Theorem and the Tree Embedding Thm, and taking m = n^{1/k}:
  • Corollary:
  • For every graph G(V,E,w) and integers k,l ≥ 1,
  • there is a (virtual) l-tree cover TC = TC(k,l) for G,
  • with:
  • Depth(TC) ≤ (2k-1)·2l
  • Overlap(TC) ≤ ⌈2k·n^{1/k}⌉,
  • Δ(T) ≤ 2n^{1/k} for every tree T∈TC

101
Tree covers (cont)
Motivating intuition: a tree cover TC constructed for a given cluster-based cover S serves as a way to materialize or implement S efficiently.
(In fact, applications employing covers actually use the corresponding tree cover)
102
Sparse spanners for unweighted graphs
Basic lemma: For an unweighted graph G(V,E), a subgraph G' is a k-spanner of G ⇔ for every (u,v)∈E, dist(u,v,G') ≤ k
(No need to look at the stretch of every pair u,v; it suffices to consider the stretch of the edges)
103
Sparse spanners for unweighted graphs
  • Algorithm UnweightedSpan(G,k):
  • Set the initial partition S = Γ̂_0(V) = { {v} | v∈V }
  • Build a coarsening partition T using Alg. BasicPart

104
Algorithm UnweightedSpan(G,k) - cont
3. For every cluster Ti∈T, construct an SPT rooted at some center ci of Ti
4. Add all the edges of these trees to the spanner G'
105
Algorithm UnweightedSpan(G,k) - cont
5. In addition, for every pair of neighboring clusters Ti, Tj:
- select a single intercluster edge eij,
- add it to G' (see the sketch below)
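
A Python sketch of steps 3-5, assuming integer vertex names and a partition produced by a BasicPart-style routine (e.g., the earlier sketch); BFS trees stand in for the SPTs, since the graph is unweighted:

```python
def unweighted_span(adj, clusters):
    """BFS-tree edges inside each cluster plus one representative edge per
    pair of adjacent clusters (a sketch of Algorithm UnweightedSpan)."""
    cid = {v: i for i, c in enumerate(clusters) for v in c}
    spanner = set()
    for c in clusters:                       # intra-cluster BFS tree
        root = next(iter(c))
        seen, frontier = {root}, [root]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w in c and w not in seen:
                        seen.add(w)
                        spanner.add((min(u, w), max(u, w)))
                        nxt.append(w)
            frontier = nxt
    picked = {}
    for u in adj:                            # one edge per adjacent cluster pair
        for w in adj[u]:
            pair = tuple(sorted((cid[u], cid[w])))
            if pair[0] != pair[1] and pair not in picked:
                picked[pair] = (min(u, w), max(u, w))
    return spanner | set(picked.values())
```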
106
Analysis
Thm: For every unweighted graph G and k ≥ 1, there is an O(k)-spanner with O(n^{1+1/k}) edges
107
Analysis (cont)
  • (a) Estimating the # edges in the spanner:
  • T is a partition of V
  • ⇒ # edges of the trees built for the T clusters ≤ n
  • Av-Deg-Partition Thm
  • ⇒ # intercluster edges ≤ n^{1+1/k}

108
Analysis (cont)
(b) Bounding the stretch: Consider an edge e=(u,w) in G (recall: it is enough to look at edges).
If e was selected to the spanner ⇒ stretch 1.
So suppose e is not in the spanner.
109
Analysis (cont)
Case 1: the endpoints u,w belong to the same cluster Ti.
Clusters have radius r ≤ k-1
⇒ the length of the path from u to w through the center ci is ≤ 2r
110
Analysis (cont)
Case 2: the endpoints belong to different clusters, u∈Ti, w∈Tj.
These clusters are connected by an inter-cluster edge
111
Analysis (cont)
There is a u-w path: from u to ci (≤ r steps), from ci through eij to cj (≤ r+1+r steps), from cj to w (≤ r steps)
⇒ total length ≤ 4r+1 ≤ 4k-3
112
Stretch factor analysis
Fixing k = log n we get:
Corollary: For every unweighted graph G(V,E) there is an O(log n)-spanner with O(n) edges.
113
Lower bounds
Def: Girth(G) = # edges of the shortest cycle in G
(Figures: girth 8, girth 3, girth 4)
114
Lower bounds
Lemma: For every k ≥ 1 and every unweighted G(V,E) with Girth(G) ≥ k+2, the only k-spanner of G is G itself (no edge can be erased from G)
115
Lower bounds
Proof: Suppose, towards contradiction, that G has some spanner G' in which the edge e=(u,v) ∈ E is omitted
⇒ G' has an alternative path P of length ≤ k from u to v
⇒ P ∪ {e} is a cycle of length ≤ k+1 < Girth(G). Contradiction.
116
Size and girth
  • Lemma:
  • For every r ≥ 1 and n-vertex, m-edge graph G(V,E) with girth Girth(G) ≥ r: m ≤ n^{1+2/(r-2)} + n
  • For every r ≥ 3, there are n-vertex, m-edge graphs G(V,E) with girth Girth(G) ≥ r and m ≥ n^{1+1/r} / 4
117
Lower bounds (cont)
Thm: For every k ≥ 3, there are graphs G(V,E) for which every (k-2)-spanner requires Ω(n^{1+1/k}) edges
118
Lower bounds (cont)
Corollary: For every k ≥ 3, there is an unweighted G(V,E) s.t.
(a) for every cover T coarsening Γ̂_1(V): if Rad(T) ≤ k then AvD(T) = Ω(n^{1/k})
(b) for every partition T coarsening Γ̂_0(V): if Rad(T) ≤ k then AvDc(T) = Ω(n^{1/k})
119
Lower bounds (cont)
Similar bounds are implied for the average degree partition problem and all the maximum degree problems.
The radius vs. chromatic number tradeoff for network decomposition presented earlier is also optimal to within a factor of k.
Lower bounds on the radius-degree tradeoff for l-regional matchings on arbitrary graphs follow similarly
120
Examples
General picture:
larger k ⇒ sparser spanner
Restricted graph families behave better: graph classes with O(n) edges have a (trivial) optimal spanner (this includes common topologies such as bounded-degree and planar graphs: rings, meshes, trees, butterflies, cube-connected cycles, ...)
121
Spanners for weighted graphs
Algorithm WeightedSpan(G,k):
1. For every 1 ≤ i ≤ log D, construct a 2^i-tree-cover TC(k,2^i) for G using Alg. TreeCover
2. Take all the edges of the tree covers into the spanner G'(V,E')
122
Spanners for weighted graphs (cont)
  • Lemma:
  • The spanner G' built by Alg. WeightedSpan(G,k) has:
  • Stretch(G') ≤ 2k-1
  • O(log D · k·n^{1+1/k}) edges

123
Greedy construction
Algorithm GreedySpan(G,k)  /* a generalization of Kruskal's MST algorithm */
  • Sort E by nondecreasing edge weight,
  • getting E = {e1,...,em}
  • (sorted so that w(ei) ≤ w(e_{i+1}))
  • Set E' ← ∅ (the spanner edges)

124
Greedy construction
  • Scan the edges one by one;
  • for each ej=(u,v) do:
  • Compute P(u,v) = shortest path from u to v in G'(V,E')
  • If w(P(u,v)) > k·w(ej)
  • (the alternative path is too long)
  • then E' ← E' ∪ {ej}
  • (must include ej in the spanner)
  • Output G'(V,E')
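
A compact, runnable Python sketch of GreedySpan; the Dijkstra helper is our own addition, and vertices are assumed to be 0..n-1:

```python
import heapq

def greedy_span(n, edges, k):
    """Sketch of Algorithm GreedySpan: scan edges by nondecreasing weight and
    keep an edge only if the spanner-so-far offers no u-v path of length
    <= k * w(e).  `edges` is a list of (w, u, v) triples."""
    adj = {v: [] for v in range(n)}
    spanner = []
    for w, u, v in sorted(edges):
        if dijkstra(adj, u, v) > k * w:      # alternative path too long
            spanner.append((w, u, v))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner

def dijkstra(adj, s, t):
    """Shortest s-t distance in the current spanner (inf if disconnected)."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```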

125
Analysis
Lemma: The spanner G' built by Algorithm GreedySpan(G,k) has Stretch(G') ≤ k
Proof: Consider two vertices x,y of G. Let P_{x,y} = (e1,...,eq) = a shortest x-y path in G
126
Analysis (cont)
Consider an edge ej=(u,v) along P_{x,y}. If ej is not included in G' ⇒ when ej was examined by the algorithm, E' contained a u-v path Pj = P(u,v) of length ≤ k·w(ej)
(Figure: Pj)
127
Analysis (cont)
This path exists in the final G' ⇒ to mimic the path P_{x,y} in G', replace each missing edge ej (not taken to G') by its substitute Pj.
The resulting path has total length ≤ k·w(P_{x,y})
128
Analysis (cont)
Lemma: The spanner has Girth(G') > k+1
Proof: Consider a cycle C in G'. Let ej=(u,v) be the last edge added to C by the algorithm. When the algorithm examined ej, the spanner E' already contained all other C edges
⇒ the shortest u-v path Pj computed by the algorithm satisfies w(Pj) ≤ w(C - ej)
129
Analysis (cont)
ej was added to E' ⇒ w(Pj) > k·w(ej) (by the selection rule)
⇒ w(C) > (k+1)·w(ej) (since w(C) ≥ w(Pj) + w(ej))
ej is the heaviest edge in C ⇒ w(C) ≤ |C|·w(ej) ⇒ |C| > k+1
(Figure: the cycle C)
130
Analysis (cont)
Recall: For every r ≥ 1, every graph G(V,E) with Girth(G) ≥ r has |E| ≤ n^{1+2/(r-2)} + n
Corollary: |E'| ≤ n^{1+2/k} + n
Thm: For every weighted graph G(V,E,w) and k ≥ 1, there is a (2k+1)-spanner G'(V,E') s.t. |E'| < n·⌈n^{1/k}⌉
131
Shallow Light Trees
Goal: Find a spanning tree T near-optimal in both depth and weight
Candidate 1: SPT. Problem: ∃G s.t. w(SPT) = Ω(n·w(MST))
(Figure: a graph G whose SPT is heavy, vs. its MST)
132
Shallow Light Trees (cont)
Candidate 2: MST. Problem: ∃G s.t. Depth(MST) = Ω(n·Depth(SPT))
(Figure: a graph G whose MST is deep, vs. its SPT)
133
Shallow Light Trees (cont)
  • Shallow-light tree (SLT)
  • for graph G(V,E,w) and root r0:
  • a spanning tree T satisfying both
  • Stretch(T,r0) = O(1)
  • w(T) / w(MST) = O(1)
  • Thm: Shallow-light trees exist
  • for every graph G and root r0

134
Light, sparse, low-stretch spanners
  • Algorithm GreedySpan guarantees:
  • Thm: For every graph G(V,E,w) and integer k ≥ 1,
  • there is a spanner G'(V,E') for G with:
  • Stretch(G') ≤ 2k+1
  • |E'| < n·⌈n^{1/k}⌉
  • w(G') ≤ w(MST(G))·O(n^{1/k})

135
Lower bound
  • Thm: For every k ≥ 3, there are graphs G(V,E,w) s.t. every spanner G'(V,E') for G with
  • Stretch(G') ≤ k-2 requires:
  • |E'| = Ω(n^{1+1/k}) and
  • w(G') = Ω(w(MST(G))·n^{1/k})
  • Proof:
  • By the bound for unweighted graphs
136
Part 3: Constructions and Applications
  • Distributed construction of the basic partition
  • Fast decompositions
  • Exploiting topological knowledge: broadcast revisited
  • Local coordination: synchronizers revisited
  • Hierarchical example: routing revisited
  • Advanced symmetry breaking: MIS revisited
137
Basic partition construction algorithm
A simple distributed implementation of Algorithm BasicPart: a single thread of computation (a single locus of activity at any given moment)
138
Basic partition construction algorithm
Components:
ClusterCons: a procedure for constructing a cluster around a chosen center v
NextCtr: a procedure for selecting the next center v around which to grow a cluster
RepEdge: a procedure for selecting a representative inter-cluster edge between any two adjacent clusters
139
Analysis
Thm: Distributed Algorithm BasicPart requires Time = O(nk), Comm = O(n^2)
140
Efficient cover construction algorithms
Goal: A fast distributed algorithm for coarsening a neighborhood cover
Known: Randomized algorithms for constructing a low (average or maximum) degree cover of G, guaranteeing bounds on the weak cluster diameter
141
Efficient decompositions
  • Goal: fast distributed algorithms for constructing a network decomposition
  • Basic tool:
  • s-separated, r-ruling set ((s,r)-set)
  • (a combination of an independent set and a dominating set):
  • W = {w1,...,wm} ⊆ V in G s.t.:
  • dist(wi,wj) ≥ s for all 1 ≤ i < j ≤ m,
  • for every v∈V there is some 1 ≤ i ≤ m s.t. dist(wi,v) ≤ r

142
Efficient decompositions
(s,r)-partition (associated with an (s,r)-set W = {w1,...,wm}): a partition of G, S(W) = {S1,...,Sm}, s.t. for all 1 ≤ i ≤ m:
  • wi∈Si
  • Rad(wi, G(Si)) ≤ r

143
Distributed construction
Using an efficient distributed construction for
(3,2)-sets and (3,2)-partitions and a recursive
coloring algorithm, one can get
Thm There is a deterministic distributed
algorithm for constructing (2e,2e)-decomposition
for given n-vertex graph in time O(2e), for e
p(c log n), for some constant cgt0
144
Exploiting topological knowledge: broadcast revisited
Delay measure: When broadcasting from a source s, the message delivery to node v suffers delay ρ if it reaches v after ρ·dist(s,v) time. For a broadcast algorithm B: Delay(B) = max_v Delay(v,B)
145
Broadcast on a subgraph
  • Lemma:
  • Flood(G') = broadcast on subgraph G' costs:
  • Message(Flood(G')) ≤ |E(G')|
  • Comm(Flood(G')) ≤ w(G')
  • Delay(Flood(G')) ≤ Stretch(G')
  • (in both the synchronous and asynchronous models)

146
Broadcast (cont)
  • Selecting an appropriate subgraph:
  • For a spanning tree T:
  • Message(Flood(T)) = n-1 (optimal)
  • Comm(Flood(T)) = w(T)
  • Delay(Flood(T)) = Stretch(T,r0)
  • Goal: Lower both w(T) and Stretch(T,r0)

147
Broadcast (cont)
  • Using a light, low-stretch tree (SLT):
  • Lemma: For every graph G and source v,
  • there is a spanning tree SLTv
  • s.t. broadcast by Flood(SLTv) costs:
  • Message(Flood(SLTv)) = n-1
  • Comm(Flood(SLTv)) = O(w(MST))
  • Delay(Flood(SLTv)) = O(1)

148
Broadcasting on a spanner
Disadvantage of SLT broadcast: a tree efficient for broadcasting from one source is poor for another, w.r.t. delay.
Solution 1: Maintain a separate tree for every source (heavy memory / update costs, involved control)
149
Broadcasting on a spanner
  • Solution 2: Flood(G') = broadcast on a spanner G'
  • Recall: For every graph G(V,E,w) and integer k ≥ 1,
  • there is a spanner G'(V,E') for G with:
  • Stretch(G') ≤ 2k+1
  • |E'| ≤ n^{1+1/k}
  • w(G') ≤ w(MST(G))·O(n^{1/k})

150
Broadcasting on a spanner (cont)
  • Setting k = log n:
  • Thm: For every graph G, there is a spanner G'
  • s.t. Algorithm Flood(G') has complexities:
  • Message(Flood(G')) = O(n·log n·log D)
  • Comm(Flood(G')) = O(log n·log D·w(MST))
  • Delay(Flood(G')) = O(log n)
  • (optimal up to polylog factors in all 3 measures)

151
Topology knowledge and broadcast
Assumption: No predefined structures exist in G (the broadcast is performed from scratch). Focus on message complexity.
Extreme models of topological knowledge:
KT∞ model (full knowledge): vertices have full topological knowledge
152
Topology knowledge and broadcast
  • KT∞ model: full topological knowledge
  • ⇒ broadcast with the minimal # messages,
  • Message = Θ(n):
  • Each v locally constructs the same tree T, sending no messages
  • Use the tree broadcast algorithm Flood(T)

153
Topology knowledge and broadcast
KT0 model (clean network): vertices know nothing about the topology
KT1 model (neighbor knowledge): vertices know their own neighbors' IDs, nothing else
154
Topology knowledge msg complexity
Lemma: In model KT0, every broadcast algorithm must send ≥ 1 message over every edge of G
Proof: Suppose there is an algorithm P disobeying the claim. Consider a graph G and an edge e=(u,w) s.t. P broadcasts on G without sending any messages over e
155
Topology knowledge msg complexity
Then G can be replaced by G' as follows (figure):
156
Clean network model
⇒ u and w cannot distinguish between the two topologies G' and G
(no messages sent on e; no messages sent on e1, e2)
157
Clean network model
⇒ In executing algorithm P over G', u and w fail to forward the message to u' and w'
158
Clean network model
⇒ u' and w' do not get the message; contradiction
159
Clean network model
Thm: Every broadcast protocol P for the KT0 model has complexity Message(P) = Ω(|E|)
160
Msg complexity of broadcast in KT1
Note: In KT1, the previous intuition fails! Nodes know the IDs of their neighbors ⇒ not all edges must be used
161
Broadcast in KT1 (cont)
Traveler algorithm: A traveler (token) performs a DFS traversal of G. The traveler carries a list L of the vertices visited so far.
162
Broadcast in KT1 (cont)
To pick the next neighbor to visit after v:
- Compare L with the list of v's neighbors,
- Make the next choice only from neighbors not in L
(If all of v's neighbors were already visited, backtrack from v on the edge to its parent.)
163
Broadcast in KT1 (cont)
Note: The traveler's forward steps are restricted to the edges of the DFS tree spanning G; non-tree edges are not traversed
⇒ no need to send messages on every edge!
164
Broadcast in KT1 (cont)
Q: Does the traveler algorithm disprove the Ω(|E|) lower bound on messages?
Observe: the # basic (O(log n)-bit) messages sent by the algorithm is Θ(n^2) >> 2n
(the lists carried by the traveler contain up to O(n) vertex IDs)
⇒ traversing an edge requires O(n) basic messages on average
165
W(E) lower bound for KT1
Idea: To avoid traversing the edge e=(v,u), the traveler algorithm must inform, say, v, that u already got the message. This can only be done by sending some message to v, which is as expensive as traversing e itself.
Intuitively, the edge e was utilized, just as if a message actually crossed it
166
Lower bound (cont)
  • Def: An edge e=(u,v) ∈ E is utilized during a run of algorithm P on G if one of the following events holds:
  • a message is sent on e
  • u either sends or receives a message containing ID(v)
  • v either sends or receives a message containing ID(u)

167
Lower bound (cont)
m = # utilized edges in a run of protocol P on G; M = # (basic) messages sent during the run
Lemma: M = Ω(m)
Proof: Consider a message sent over e=(u,v). The message contains O(1) node IDs z1,...,zB. Each zi utilizes ≤ 2 edges, (u,zi) and (v,zi) (if they exist). Also, e itself becomes utilized.
168
Lower bound (cont)
  • ⇒ To prove a lower bound on # messages, it suffices to prove a lower bound on the # edges utilized by the algorithm
  • Lemma: Every algorithm for broadcast under the KT1 model must utilize every edge of G
  • Thm: Every broadcast protocol P for the KT1 model has complexity Message(P) = Ω(|E|)

169
Lower bound (cont)
Observation: The Thm no longer holds if, in addition to arbitrary computations, we allow protocols with time unbounded in the network size. Once such behavior is allowed, one may encode an unbounded number of IDs by the choice of the transmission round, and hence implement, say, the traveler algorithm. (This relates only to the synchronous model; in the asynchronous model such encoding is impossible!)
170
Hierarchy of partial topological knowledge
KTk model (known topology to radius k): Every vertex knows the topology of the neighborhood of radius k around it, G(Γ_k(v))
Example: In KT2, v knows the topology of its 2-neighborhood
171
Hierarchy of partial topological knowledge
KTk model (known topology to radius k): every vertex knows the topology of the subgraph of radius k around it, G(Γ_k(v))
Information-communication tradeoff: For every fixed k ≥ 1, the # basic messages required for broadcast in the KTk model is Θ(min{|E|, n^{1+Θ(1)/k}})
172
Hierarchy of partial topological knowledge
Lower bound proof: a variant of the KT1 case.
Upper bound idea: v knows all edges at distance ≤ k from it
⇒ v can detect all short cycles (length ≤ 2k) going through it
⇒ it is possible to disconnect all short cycles locally, by deleting one edge in each cycle.
173
KTk model
Algorithm k-Flood. Assumption: There is some (locally computable) assignment of distinct weights to the edges
174
KTk model
  • Algorithm k-Flood:
  • Define a subgraph G*(V,E*) of G:
  • mark the heaviest edge in each short cycle unusable,
  • and include precisely all unmarked edges in E*
  • (Only e's endpoints need to know whether e is usable
  • ⇒ given the partial topological knowledge, the edge deletions are done locally, sending no messages; see the sketch below)
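
One way to realize this local test, sketched under the assumption (valid for distinct weights) that "heaviest edge on some cycle of length ≤ 2k" is equivalent to "endpoints joined by a path of ≤ 2k-1 strictly lighter edges"; the function name and data layout are ours:

```python
def usable(adj, weight, u, v, k):
    """Local test for Algorithm k-Flood: edge (u,v) stays usable unless it is
    the heaviest edge on some cycle of length <= 2k, i.e. unless u and v are
    joined by a path of <= 2k-1 strictly lighter edges.  `weight` maps a
    (min,max) endpoint pair to its distinct weight; both endpoints can run
    this on their radius-k view of the topology."""
    wuv = weight[(min(u, v), max(u, v))]
    frontier, seen = {u}, {u}
    for _ in range(2 * k - 1):              # BFS over lighter edges only
        nxt = set()
        for x in frontier:
            for y in adj[x]:
                if {x, y} == {u, v}:
                    continue                # skip the edge e itself
                if weight[(min(x, y), max(x, y))] < wuv and y not in seen:
                    if y == v:
                        return False        # lighter short cycle through e
                    seen.add(y)
                    nxt.add(y)
        frontier = nxt
    return True
```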

175
KTk model
  • Algorithm k-Flood (cont):
  • Perform the broadcast by Alg. Flood(G*) on G* (i.e., whenever v receives the message for the first time, it sends it over all incident usable edges e∈E*)
176
Analysis
Lemma: G connected ⇒ G* connected too.
Consequence of the marking process defining G*: all short cycles are disconnected
Lemma: Girth(G*) ≥ 2k+1
177
Analysis
Recall: For every r ≥ 1, a graph G(V,E) with girth Girth(G) ≥ r has |E| ≤ n^{1+2/(r-2)} + n
Corollary: |E*| = O(n^{1+c/k}) for a constant c > 0
Thm: For every G(V,E) and k ≥ 1, Algorithm k-Flood performs broadcast in the KTk model with Message(k-Flood) = O(min{|E|, n^{1+c/k}}) (for a fixed c > 0)
178
Synchronizers revisited
  • Recall:
  • Synchronizers enable transforming an algorithm for synchronous networks into an algorithm for asynchronous networks.
  • They operate in 2 phases per pulse:

Phase A (of pulse p): Each processor learns (in finite time) that all messages it sent during pulse p have arrived (it is "safe")
Phase B (of pulse p): Each processor learns that all its neighbors are safe w.r.t. pulse p
179
Learning neighbor safety
(Figure: Safe / Ready)
180
Synchronizer costs
Goal: A synchronizer capturing reasonable middle points on the time-communication tradeoff scale
181
Synchronizer γ
Assumption: Given a low-degree partition S:
  • Rad(S) ≤ k-1,
  • # inter-cluster edges in G(S) ≤ n^{1+1/k}
182
Synchronizer γ (cont)
For each cluster in S, build a rooted spanning tree.
In addition, between any two neighboring clusters, designate a synchronization link.
(Figure: synchronization link, spanning tree)
183
Handling safety information (in Phase B)
Step 1: For every cluster separately, apply synchronizer β (by the end of the step, every node knows that every node in its cluster is safe)
(Messages: my_subtree_safe, cluster_safe)
184
Handling safety information (in Phase B)
Step 2: Every node incident to a synchronization link sends a message to the other cluster, saying "my cluster is safe"
(Message: my_cluster_safe)
185
Handling safety information (in Phase B)
  • Step 3:
  • Repeat step 1, but the convergecast performed in each cluster carries different information:
  • whenever v learns that all clusters neighboring its subtree are safe, it reports this to its parent.

(Message: all_clusters_adjacent_to_my_subtree_are_safe)
186
Handling safety information (in Phase B)
Step 4: When the root learns that all neighboring clusters are safe, it broadcasts "start new pulse" on the tree
(By the end of the step, every node knows that all its neighbors are safe)
(Message: all_neighboring_clusters_are_safe)
187
Analysis
  • Claim: Synchronizer γ is correct.
  • Claim:
  • C_pulse(γ) = O(n^{1+1/k})
  • T_pulse(γ) = O(k)

188
Analysis (cont)
Proof: Time to implement one pulse: ≤ 2 broadcast/convergecast rounds in the clusters (+ 1 message-exchange step among border vertices in neighboring clusters)
⇒ T_pulse(γ) ≤ 4·Rad(T) + 1 = O(k)
189
Complexity
Messages: The broadcast/convergecast rounds, separately in each cluster, cost O(n) messages in total (clusters are disjoint). The communication step among neighboring clusters requires n·AvDc(T) = O(n^{1+1/k}) messages
⇒ C_pulse(γ) = O(n^{1+1/k})
190
Synchronizer δ
Assumption: Given a sparse k-spanner G'(V,E')
(Figure: G'(V,E'))
191
Synchronizer δ (cont)
Handling safety information (in Phase B):
  • When v learns it is safe for pulse p:
  • For k rounds do:
  • Send "safe" to all spanner neighbors
  • Wait to hear the same from these neighbors
192
Synchronizer δ
Lemma: For every 1 ≤ i ≤ k, once v completes i rounds, every node u at distance dist(u,v,G') ≤ i from v in the spanner G' is safe
193
Analysis
Proof: By induction on i. For i=0: immediate.
For i+1: Consider the time v finishes its (i+1)-st round.
194
Analysis
⇒ v has received i+1 "safe" messages from its neighbors in G'.
These neighbors each sent their (i+1)-st message only after finishing their i-th round
195
Analysis
By the inductive hypothesis, for every such neighbor u, every w at distance ≤ i from u in G' is safe.
⇒ every w at distance ≤ i+1 from v in G' is safe too
196
Analysis (cont)
Corollary: When v finishes k rounds, each neighbor of v in G is safe (v is ready for pulse p+1)
Proof: By the lemma, at that time, every processor u at distance ≤ k from v in G' is safe. By the definition of k-spanners, every neighbor of v in G is at distance ≤ k from v in G'
⇒ every neighbor is safe.
197
Analysis (cont)
  • Lemma: If G has a k-spanner with m edges,
  • then it has a synchronizer δ with:
  • T_pulse(δ) = O(k)
  • C_pulse(δ) = O(km)
198
Summary
On a general n-vertex graph, for parameter k ≥ 1:

            α        β        γ             δ
C_pulse     O(|E|)   O(n)     O(n^{1+1/k})  O(k·n^{1+1/k})
T_pulse     O(1)     O(Diam)  O(k)          O(k)
199
Compact routing revisited
Tradeoff between stretch and space: Any routing scheme for general n-vertex networks achieving stretch factor k ≥ 1 must use Ω(n^{1+1/(2k+4)}) bits of routing information overall
(The lower bound holds for unweighted networks as well, and concerns the total memory requirements)
200
Interval tree routing
Goal: Given a tree T, design a routing scheme based on interval labeling
Idea: Label each v by an integer interval Int(v) s.t. for every two vertices u,v: Int(v) ⊆ Int(u) ⇔ v is a descendant of u in T
201
Interval labeling
  • Algorithm IntLab on tree T:
  • Perform a depth-first (DFS) tour of T, starting at the root. Assign each u∈T a depth-first number DFS(u)

202
Interval labeling (cont)
  • Algorithm IntLab on tree T (cont):
  • Label node u by the interval [DFS(u), DFS(w)], where w = the last descendant of u visited by the DFS
  • (Labels contain ⌈2·log n⌉ bits)
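
A Python sketch of the labeling, together with the forwarding decision spelled out on the next slides; the names are ours, and the tree is assumed given as a children dict plus a parent map:

```python
def int_lab(children, root):
    """Algorithm IntLab (sketch): DFS-number the tree and label each node u
    by [DFS(u), DFS(last descendant of u)]."""
    interval = {}
    counter = [0]
    def dfs(u):
        interval[u] = [counter[0], counter[0]]
        counter[0] += 1
        for c in children.get(u, []):
            dfs(c)
            interval[u][1] = interval[c][1]   # extend to last descendant
    dfs(root)
    return {u: tuple(iv) for u, iv in interval.items()}

def next_hop(w, parent, children, interval, v):
    """ITR forwarding decision at an intermediate node w for destination v."""
    lo, hi = interval[w]
    vlo, vhi = interval[v]
    if (vlo, vhi) == (lo, hi):
        return None                           # w == v: deliver
    if lo <= vlo and vhi <= hi:               # v lies in w's subtree
        for c in children.get(w, []):
            clo, chi = interval[c]
            if clo <= vlo and vhi <= chi:
                return c                      # unique child containing v
    return parent[w]                          # otherwise route upward
```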

203
Interval tree routing
Data structures: Vertex u stores its own label Int(u) and the labels of its children in T
Forwarding protocol: routes along the unique tree path
204
Interval tree routing
Lemma: For every tree T(V,E,w), the scheme ITR(T) has Dilation(ITR,G) = 1 and uses O(Δ(T)·log n) bits per vertex, and O(n·log n) memory in total
205
Interval tree routing (cont)
Forwarding protocol: routing M from u to v. At an intermediate node w along the route, compare Int(v) with Int(w).
  • Possibilities:
  • Int(w) = Int(v) (w = v):
  • receive M.
206
Interval tree routing (cont)
  • Int(w) ⊆ Int(v) (w is a descendant of v):
  • forward M upwards to the parent
(Figure: v above w)
207
Interval tree routing (cont)
  • Disjoint intervals (v, w in different subtrees):
  • forward M upwards to the parent
(Figure: v and w in different subtrees)
208
Interval tree routing (cont)
  • Int(v) ⊆ Int(w) (v is a descendant of w):
  • examine the intervals of w's children,
  • find the unique child w' s.t. Int(v) ⊆ Int(w'),
  • forward M to w'
(Figure: v below w)
209
ITR for general networks
  • Construct a shortest paths tree T for G,
  • Apply ITR to T.
  • Total memory requirement: O(n·log n) bits
  • Problems:
  • - the stretch may be as high as Rad(G),
  • - the maximum memory per vertex depends on the maximum degree of T

210
Overcoming high max degree problem
  • Recall: For every rooted tree T and integer m ≥ 1,
  • there is an embedded virtual tree S with the same node set and same root (but a different edge set), s.t.:
  • Δ(S) ≤ 2m
  • Each edge of S corresponds to a path of length ≤ 2 in T
  • Depth_S(v) ≤ (2·log_m Δ(T) - 1)·Depth_T(v), for every v

211
Overcoming high max degree problem
  • Setting m = n^{1/k}, embed in T a virtual tree T' with:
  • Δ(T') < 2n^{1/k}
  • Depth(T') < (2k-1)·Rad(G)

212
Overcoming high max degree problem
Lemma: For every G(V,E,w), the ITR(T') scheme guarantees message delivery in G with communication O(Rad(G)) and uses O(n·log n) memory
Problem: the stretch may be as high as Rad(G)
213
A regional (C,l)-routing scheme
  • For every u,v:
  • If dist(u,v) ≤ l, the scheme succeeds in delivering M from u to v.
  • Else, the routing may fail, and M returns to u
  • Communication cost ≤ C.
214
A regional (C,l)-routing scheme
  • Recall: For a graph G(V,E,w) and integers k,l ≥ 1,
  • there is an l-tree cover TC = TC(k,l) with:
  • Depth(TC) ≤ (2k-1)·l
  • Overlap(TC) ≤ 2k·n^{1/k}
  • ⇒ sum of the tree sizes = O(k·n^{1+1/k})
215
Data structures
  1. Construct the tree cover TC(k,l)
  2. Assign each tree T in TC(k,l) a distinct Id(T)
  3. Set up an interval tree routing component ITR(T) on each tree T ∈ TC(k,l)
216
Data structures
  • Recall: Every v∈V has a home tree home(v) in TC(k,l), containing its entire l-neighborhood.
  • Scheme RS_{k,l}:
  • Routing label for v:
  • the pair (Id(T), Int_T(v)), where
  • Id(T) = the ID of v's home tree
  • Int_T(v) = v's routing label in ITR(T)
217
Data structures
Forwarding protocol: routing M from u to v with label (Id(T), Int_T(v)): examine whether u belongs to T.
- u not in T: detect an "unknown destination" failure and terminate the routing procedure.
- u in T: send M using the ITR(T) component
218
Analysis
Lemma: For every graph G and integers k,l ≥ 1, the scheme RS_{k,l} is a regional (O(kl), l)-routing scheme and it uses O(k·n^{1+1/k}·log n) memory
219
Analysis (cont)
Proof (stretch): Suppose dist(u,v) ≤ l for some u,v. By definition, v ∈ Γ_l(u). Let T = the home tree of u
⇒ Γ_l(u) ⊆ V(T) ⇒ v ∈ T ⇒ ITR(T) succeeds. Also, the path length is O(Depth(T)) = O(kl)
220
Analysis (cont)
Memory: Each v stores O(Δ(T(C))·log n) bits per cluster C∈T to which it belongs, where T(C) = the spanning tree constructed for C
⇒ O(k·n^{1+1/k}·log n) memory in total
221
Hierarchical routing scheme RSk
Data structures: For 1 ≤ i ≤ log D, construct a regional (O(k·li), li)-routing scheme Ri = RS_{k,li} for li = 2^i. Each v belongs to all regional schemes Ri (it has a home tree homei(v) in each Ri and a routing label at each level, and stores all the info required for each scheme)
222
Hierarchical routing scheme RSk
  • Routing label: the concatenation of the regional labels
  • Forwarding protocol: routing M from u to v:
  • Identify the lowest-level usable regional scheme Ri
  • (u first checks if it belongs to the tree home1(v);
  • if not, it checks the second level, etc.; see the sketch below)
  • Forward M to v on the ITR(homei(v)) component of the regional scheme Ri
223
Analysis
Lemma: Dilation(RS_k) = O(k).
Proof: Suppose u sends M to v. Let d = dist(u,v) and j = ⌈log d⌉ (so 2^{j-1} < d ≤ 2^j).
i = the lowest level s.t. u belongs to v's home tree
224
Analysis
u must belong to homej(v) ⇒ the regional scheme Rj is usable (if no lower level was)
(Note: the highest-level scheme R_{log D} always succeeds)
Comm(RS_k,u,v) = ρ(RS_k,u,v) ≤ Σ_{i=1}^{j} O(k·2^i) = O(k·2^{j+1}) = O(k)·dist(u,v)
225
Analysis (cont)
Thm: For every graph G and integer k ≥ 1, the hierarchical routing scheme RS_k has:
Dilation(RS_k) = O(k)
Mem(RS_k) = O(k·n^{1+1/k}·log n·log D)
226
Analysis (cont)
  • Proof:
  • Memory required by the hierarchical scheme:
  • log D terms, each bounded by O(k·n^{1+1/k}·log n)
  • ⇒ total memory O(k·n^{1+1/k}·log n·log D) bits

227
Deterministic decomposition-based MIS
Assumption: given a (d,c)-decomposition for G, plus a coloring of the clusters in the cluster graph.
MIS computation: c phases. Phase i computes an MIS among the vertices belonging to clusters colored i
(These clusters are non-adjacent, so we may compute an MIS for each independently, in parallel, using a PRAM-based distributed algorithm, in time O(d·log^2 n).)
228
Deterministic MIS (cont)
Note: A vertex joining the MIS must mark all its neighbors as excluded from the MIS, including those of other colors
⇒ not all occupants of clusters colored i participate in phase i; only those not excluded in earlier phases
229
Deterministic MIS (cont)
  • Procedure DecompToMIS(d,c) - code for v:
  • For phase i = 1 through c do:
  • /* Each phase consists of O(d·log^2 n) rounds */
  • If v's cluster is colored i then do:
  • If v has not decided yet (b_v = -1)
  • then compute an MIS on the cluster
  • using the PRAM-based algorithm
  • If v joined the MIS then inform all neighbors
  • Else if a neighbor joined the MIS then decide b_v ← 0

230
Analysis
# phases = χ(G(S)) = O(c) ⇒ Time = O(c·d·log^2 n)
Lemma: There is a deterministic distributed algorithm that, given a colored (d,c)-decomposition for G, computes an MIS for G in time O(d·c·log^2 n)
Recall: For every graph G and k ≥ 1, there is a (k, k·n^{1/k})-decomposition
231
Analysis (cont)
Taking k = log n, we get:
Corollary: Given a colored (log n, log n)-decomposition for G, there is a deterministic distributed MIS algorithm with time O(polylog n)
Recall: There is a deterministic algorithm for computing a decomposition in time O(2^ε) for ε = c·√(log n), constant c > 0
Corollary: There is a deterministic distributed MIS algorithm with time O(2^{√(c·log n)})
232
Thank You for your attention