Transcript and Presenter's Notes

Title: Phase Transitions in Random Geometric Graphs, with Algorithmic Implications


1
Phase Transitions in Random Geometric Graphs, with Algorithmic Implications
Ashish Goel, Stanford University
Joint work with Sanatan Rai and Bhaskar Krishnamachari
http://www.stanford.edu/~ashishg
2
Geometric Random Graphs
  • G(n, r) in d dimensions
  • n points uniformly distributed in [0,1]^d
  • Two points are connected if their Euclidean distance is less than r
  • Sensor networks can often be modeled as G(n, r) with d = 2
  • E.g., sensors sprinkled from a helicopter over a corn field
  • The wireless radius corresponds to r
  • Question: How should n and r be chosen to ensure that G(n, r) has a desirable property P (e.g., connectivity, 2-connectivity, large cliques) with high probability?

3
[Figure: a point X in the unit square [0,1]², with a disk of radius r around it]
  • Any other point Y is a neighbor of X with probability πr²
  • Expected degree of X is πr²(n − 1)
  • (A small simulation sketch of this calculation follows below.)
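A minimal simulation sketch (added here, not part of the original deck) of the calculation above: sample G(n, r) for d = 2 and compare the empirical mean degree with πr²(n − 1). The function name `sample_gnr` and all parameter choices are illustrative assumptions.

```python
# Hedged sketch: sample G(n, r) in d = 2 and check the mean degree against the
# boundary-free heuristic pi * r^2 * (n - 1) from the slide above.
import numpy as np

def sample_gnr(n, r, d=2, seed=None):
    """n uniform points in [0,1]^d; edges between pairs at Euclidean distance < r."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, d))
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    adj = (dist < r) & ~np.eye(n, dtype=bool)   # drop self-loops
    return pts, adj

if __name__ == "__main__":
    n, r = 2000, 0.05
    _, adj = sample_gnr(n, r, seed=0)
    print("empirical mean degree  :", adj.sum(axis=1).mean())
    print("heuristic pi*r^2*(n-1) :", np.pi * r * r * (n - 1))
```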
4
Thresholds for monotone properties?
  • A graph property P is monotone if, for all graphs G1(V, E1) and G2(V, E2) such that E1 ⊆ E2,
  • G1 satisfies P ⇒ G2 satisfies P
  • Informally, addition of edges preserves P
  • Examples: connectivity, Hamiltonicity, bounded diameter, expansion, degree ≥ k, existence of minors, k-connectivity, ...
  • Folklore Conjecture: All monotone properties have sharp thresholds for geometric random graphs
  • [Krishnamachari, PhD Thesis '02]

5
Example: Connectivity
  • Define c(n) such that π c(n)² = log n / n
  • Asymptotically, when d = 2:
  • G(n, c(n)) is disconnected with high probability
  • For any ε > 0, G(n, (1 + ε)c(n)) is connected whp
  • So, c(n) is a sharp threshold for connectivity at d = 2 [Gupta and Kumar '98; Penrose '97]
  • Similar thresholds exist for all dimensions: c_d(n) ≈ (log n / (n V_d))^{1/d}, where V_d is the volume of the unit ball in d dimensions
  • Average degree ≈ log n at the threshold (a short worked derivation follows below)
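A short worked restatement (added here; not on the original slide) showing how the "average degree ≈ log n" claim follows from the definition of c(n) and the expected-degree formula on slide 3:

```latex
% Worked check of the d = 2 threshold (added; not from the original deck).
\[
  \pi\, c(n)^2 = \frac{\log n}{n}
  \;\Longrightarrow\;
  c(n) = \sqrt{\frac{\log n}{\pi n}},
  \qquad
  \underbrace{\pi\, c(n)^2\,(n-1)}_{\text{expected degree at } r = c(n)}
  = \frac{\log n}{n}\,(n-1) \;\approx\; \log n .
\]
```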

6
Width and sharp thresholds
  • For property P, and 0 < ε < 1, if there exist two functions L(n) and U(n) such that
  • Pr[G(n, L(n)) satisfies P] ≤ ε, and
  • Pr[G(n, U(n)) satisfies P] ≥ 1 − ε,
  • then the ε-width w_ε(n) of P is defined as U(n) − L(n)
  • If w_ε(n) = o(1) for all ε, then P is said to have a sharp threshold

7
Example
[Figure: Pr[G(n, r) satisfies P] plotted against r, rising from ε to 1 − ε; the ε-width is the length of the r-interval between these two levels]
8
Connections (?) to Bernoulli Random Graphs
  • Famous graph family G(n, p)
  • Also known as Erdős–Rényi graphs
  • Edges are i.i.d.: each edge is present with probability p
  • Connectivity threshold is p(n) = log n / n
  • Average degree is exactly the same as that of geometric random graphs at their connectivity threshold!
  • All monotone properties have ε-width O(1/log n) for any fixed ε in the Bernoulli graph model [Friedgut and Kalai '96]
  • Cannot be improved beyond O(1/log² n); almost matched [Bourgain and Kalai '97]
  • Proof relies heavily on independence of edges
  • There is no edge independence in geometric random graphs ⇒ we need new techniques

9
Our results
  • The ε-width of any monotone property is O(c_d(n)) for d ≥ 3 and O(c_2(n) log^{1/4} n) for d = 2 (see slide 20)
  • Sharp thresholds in the geometric random graph model
  • Sharper transition (inverse polynomial width) than Bernoulli random graphs (inverse logarithmic width)
  • There exist monotone properties with width Ω(1/n^{1/d}) for d ≥ 2, and Ω(√(log(1/ε))/√n) for d = 1 (see slide 24)
  • Tight for d = 1, sub-logarithmic gap for d > 1

10
Why c_d(n)?
  • Why express results in terms of c_d(n)?
  • Width gives a sharp additive threshold
  • We are typically interested in properties that subsume connectivity
  • For such properties, an additive threshold in terms of c_d(n) also corresponds to a multiplicative threshold
  • The exact sharpness of the multiplicative threshold depends on L(n) and on the exact additive bounds (details omitted)

11
Bottleneck Matchings
  • Draw n blue points and n red points uniformly and independently from [0,1]^d
  • B_n, R_n denote the sets of blue and red points, respectively
  • A minimum bottleneck matching between R_n and B_n is a one-to-one mapping
  • f: B_n → R_n
  • which minimizes
  • max_{u ∈ B_n} ‖f(u) − u‖₂
  • The corresponding distance (max_{u ∈ B_n} ‖f(u) − u‖₂) is called the minimum bottleneck distance
  • Let X_n denote this minimum bottleneck distance (a computational sketch follows below)
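A computational sketch (mine, not from the deck) for small n: X_n can be found by binary searching over the candidate pairwise distances and testing, for each threshold, whether a perfect matching exists using only pairs within that distance. It assumes numpy and networkx are available; all names are illustrative.

```python
# Sketch (my own): exact minimum bottleneck distance X_n for small point sets.
import numpy as np
import networkx as nx

def min_bottleneck_distance(blue, red):
    """blue, red: (n, d) arrays. Smallest g such that a one-to-one matching
    f: blue -> red exists with max ||f(u) - u||_2 <= g."""
    n = len(blue)
    dist = np.sqrt(((blue[:, None, :] - red[None, :, :]) ** 2).sum(axis=-1))
    candidates = np.unique(dist)

    def has_perfect_matching(g):
        G = nx.Graph()
        G.add_nodes_from(("b", i) for i in range(n))
        G.add_nodes_from(("r", j) for j in range(n))
        G.add_edges_from((("b", i), ("r", j))
                         for i in range(n) for j in range(n) if dist[i, j] <= g)
        match = nx.bipartite.maximum_matching(G, top_nodes=[("b", i) for i in range(n)])
        return sum(1 for k in match if k[0] == "b") == n   # every blue point matched

    lo, hi = 0, len(candidates) - 1                         # binary search (monotone in g)
    while lo < hi:
        mid = (lo + hi) // 2
        if has_perfect_matching(candidates[mid]):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 2
    print("X_n for n =", n, ":", min_bottleneck_distance(rng.random((n, d)), rng.random((n, d))))
```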

12
Example
[Figure: matched red and blue points; the longest matching edge has length γ, the bottleneck distance]
13
Bottleneck Matchings and Width
  • Theorem: If Pr[X_n > γ] ≤ p, then the √p-width of any monotone property is at most 2γ
  • Implication: we can analyze just one quantity, X_n, instead of all monotone properties (in particular, we can provide simulation-based evidence)
  • Proof: Let P be any monotone property
  • Let ε = √p
  • Choose L(n) such that Pr[G(n, L(n)) satisfies P] ≥ ε
  • Define U(n) = L(n) + 2γ
  • Draw two random graphs G_L and G_U (independently) from G(n, L(n)) and G(n, U(n)), respectively
  • Let B_n, R_n denote the sets of points in G_L, G_U respectively

14
Bottleneck Matchings and Width (proof contd.)
  • Assume X_n ≤ γ.
  • Let f be the corresponding minimum bottleneck matching between R_n and B_n.
  • For any u, v ∈ B_n:
  • ‖f(u) − f(v)‖₂ ≤ ‖f(u) − u‖₂ + ‖u − v‖₂ + ‖f(v) − v‖₂ ≤ 2γ + ‖u − v‖₂
  • Hence, (u, v) is an edge in G_L ⇒ (f(u), f(v)) is an edge in G_U
  • i.e., G_L is a subgraph of G_U
  • By definition, Pr[X_n > γ] ≤ p
  • ⇒ Pr[G_L is not a subgraph of G_U] ≤ p = ε²   (1)

15
Illustration I: Triangle Inequality
16
Bottleneck Matchings and Width (proof contd.)
  • Pr[G_L is not a subgraph of G_U] ≤ p = ε²   (1)
  • Let q = Pr[G_U does not satisfy P]
  • P is monotone and Pr[G_L satisfies P] ≥ ε,
  • ⇒ Pr[G_L is not a subgraph of G_U] ≥ εq   (2)
  • Combining (1) and (2), we get εq ≤ ε², i.e., q ≤ ε
  • Therefore, Pr[G_U satisfies P] ≥ 1 − ε
  • i.e., the ε-width of P is at most U(n) − L(n) = 2γ
  • Done!

17
Illustration II: Probability Amplification
[Figure: independent draws G_U and G_L]
18
Our goal now: analyze the bottleneck matching distance X_n. Specifically, we are done if X_n = O(γ(n)) with high probability, for some small γ(n).
19
Comparison with Bernoulli Random Graphs?
  • We are attempting to show something quite strong:
  • G(n, r) is a subgraph of G(n, r + γ) whp, for small γ
  • Laminar structure
  • The corresponding result is NOT true for Bernoulli random graphs, even for γ = ½
  • If small bottleneck matchings exist whp, we will get stronger thresholds than for Bernoulli random graphs

20
Existence of Small Bottleneck Matchings
  • The bottleneck matching length is:
  • O(c_d(n)) whp for d ≥ 3 [Shor and Yukich 1991; we present a simpler proof]
  • O(c_2(n) log^{1/4} n) whp for d = 2 [Leighton and Shor 1989]
  • O(√(log(1/ε))/√n) with probability 1 − ε for d = 1 [our paper (folklore?)]
  • This gives us the desired widths
  • We will now present the main idea behind our d = 1 and d ≥ 3 proofs

21
Demonstrating Small Bottleneck Matchings: d = 1
  • The Stretch-Shrink-Divide Algorithm (a code sketch follows after this list)
  • Let ⟨x₁, x₂, …, x_n⟩ be the coordinates of the red points, in increasing order (assume n = 2^k)
  • The coordinates are uniformly distributed in [0,1]
  • Multiply the first n/2 coordinates by 1/(2 x_{n/2+1})
  • The first n/2 coordinates are now uniformly distributed in [0, 1/2]
  • Let δ₁ denote |1/2 − x_{n/2+1}|. No point in the left half moves by more than δ₁
  • Perform a symmetric transformation on the last n/2 coordinates (now uniform in [1/2, 1])
  • Two regions of equal size and equal density
  • Recurse
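A sketch (mine) of the recursion just described, for d = 1; the snap-to-cell-centre base case and all names are my own simplifications. Running it on sorted uniform points shows the maximum displacement shrinking like roughly 1/√n.

```python
# Sketch of the 1-D Stretch-Shrink-Divide recursion described above (my rendering).
# Each level re-uniformizes the two halves of the current interval and recurses;
# the base case snaps the single remaining point to the centre of its grid cell.
import numpy as np

def stretch_shrink_divide(xs, lo=0.0, hi=1.0):
    """xs: sorted coordinates (len a power of two), assumed uniform in [lo, hi].
    Returns new coordinates with exactly one point per cell of a uniform grid."""
    n = len(xs)
    if n == 1:
        return np.array([(lo + hi) / 2.0])
    mid = (lo + hi) / 2.0
    t = xs[n // 2]       # x_{n/2+1}: left half [lo, t] is stretched/shrunk onto [lo, mid]
    s = xs[n // 2 - 1]   # x_{n/2}: right half [s, hi] is mapped symmetrically onto [mid, hi]
    left = lo + (xs[: n // 2] - lo) * (mid - lo) / (t - lo)
    right = hi - (hi - xs[n // 2 :]) * (hi - mid) / (hi - s)
    return np.concatenate([stretch_shrink_divide(left, lo, mid),
                           stretch_shrink_divide(right, mid, hi)])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 2 ** 12
    xs = np.sort(rng.random(n))
    ys = stretch_shrink_divide(xs)
    print("max displacement:", np.abs(ys - xs).max(),
          "   reference scale 1/sqrt(n):", 1 / np.sqrt(n))
```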

22
For higher d
  • Divide using each coordinate in turn
  • After d steps, we have 2^d sets of n/2^d points, each set uniformly distributed in cubes of side ½
  • After log n steps, there is a red point in each cell of a uniform grid superimposed on the unit cube
  • Run the same algorithm on the blue points
  • The (unique) red and blue point in each cell are then matched to each other

23
Analysis: The Basic Idea
  • Consider δ₁ = |1/2 − x_{n/2+1}|
  • Intuition: δ₁ looks like a normal random variable
  • Lemma: Pr[δ₁ ≥ a·b·n^{−1/2}] ≤ exp(−b²) for an appropriate constant a
  • Recursive application of this lemma at different scales gives tight results for d = 1 and d ≥ 3
  • Details omitted
  • Shor and Yukich used a similar recursion but did not re-uniformize, resulting in a more complex proof

24
Lower bound examples
  • For d = 1, the property
  • P = {G : ∀ v ∈ V(G), degree(v) ≥ |V(G)|/4}
  • has width Ω(√(log(1/ε))/√n)
  • Basic idea: just the two endpoints of the line are interesting for the purpose of finding the minimum degree
  • For d ≥ 2, the property P = "G is a clique" has width Ω(1/n^{1/d})
  • Open problem: close the gap between the upper and lower bounds on the width for d ≥ 2
  • Also, all our lower bound examples undergo phase transitions at r = Θ(1). Is there something interesting and different in the region where r is of the order of the connectivity threshold?

25
Implications: Mixing Time
  • Recent result: the fastest mixing Markov chain defined on G(n, r) has mixing time Θ(r⁻² log n) for large enough r [Boyd, Ghosh, Prabhakar, Shah '05]
  • Alternate proof:
  • GRID(n, r): n points are laid on a grid in [0,1]², and two points are connected if they are within distance r
  • Fastest mixing time of GRID(n, r) is Θ(r⁻² log n) [trivial]
  • G(n, r) is a super-graph of GRID(n, r − δ) and a sub-graph of GRID(n, r + δ) whp, for small enough δ [our result]
  • ⇒ Fastest mixing time of G(n, r) is Θ(r⁻² log n) whp (a rough numerical illustration of the r⁻² scaling follows below)
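A rough numerical illustration (mine; this uses the lazy simple random walk on GRID(n, r), not the fastest-mixing chain of Boyd et al.): the relaxation time 1/(1 − λ₂) grows roughly like r⁻² as r shrinks, which is the scaling claimed above. Function names and parameters are illustrative.

```python
# Rough illustration (my own): relaxation time of the lazy simple random walk on
# GRID(n, r) (points on a uniform grid in [0,1]^2, edges within distance r).
# Not the fastest-mixing chain; only meant to show the r^{-2} growth.
import numpy as np

def grid_relaxation_time(side, r):
    """side x side grid points in [0,1]^2; lazy walk P = I/2 + D^{-1} A / 2."""
    coords = np.array([(i, j) for i in range(side) for j in range(side)]) / (side - 1)
    diff = coords[:, None, :] - coords[None, :, :]
    adj = (np.sqrt((diff ** 2).sum(-1)) < r) & ~np.eye(len(coords), dtype=bool)
    deg = adj.sum(1)
    # D^{-1/2} A D^{-1/2} is symmetric and has the same eigenvalues as D^{-1} A.
    norm = adj / np.sqrt(np.outer(deg, deg))
    lam2 = np.sort(np.linalg.eigvalsh(norm))[-2]      # second-largest eigenvalue
    lam2_lazy = 0.5 + 0.5 * lam2
    return 1.0 / (1.0 - lam2_lazy)

if __name__ == "__main__":
    side = 20                                          # n = side^2 = 400 points
    for r in (0.4, 0.2, 0.1):
        print(f"r = {r:4.2f}  relaxation time = {grid_relaxation_time(side, r):8.1f}"
              f"   r^-2 = {r ** -2:6.1f}")
```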

26
Implications: Spectra
  • Our techniques can be extended to show that the spectrum of random geometric graphs converges to the spectrum of the grid graph [Rai '05]

27
Implications: Coverage
  • Coverage: any point in the unit square must be within distance r of one of the n sensors
  • Known: there is a sharp threshold in r [Shakkottai, Srikant, Shroff '04]
  • Coverage is NOT a graph property, so it does not fall within our framework
  • But the laminar structure in our proof implies a sharp threshold for coverage as well (weaker than the sharpest known)

28
Conclusions
  • Monotone properties in G(n, r) have sharp thresholds
  • Much sharper than for Bernoulli random graphs
  • Much stronger too: random geometric graphs exhibit a laminar structure
  • Useful for recovering several known results and proving new ones
  • Randomness is often a red herring, since the deterministic grid often yields asymptotically tight upper and lower bounds
  • Open problem: does laminarity imply anything about throughput (via separators)?