Tapestry: Finding Nearby Objects in Peer-to-Peer Networks - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
Tapestry: Finding Nearby Objects in Peer-to-Peer
Networks
  • Joint with
  • Ling Huang
  • Anthony Joseph
  • Robert Krauthgamer
  • John Kubiatowicz
  • Satish Rao
  • Sean Rhea
  • Jeremy Stribling
  • Ben Zhao

2
Object Location
3
Behind the Cloud
4
Why nearby? (DHT vs. DOLR)
  • Nearby means low stretch: the ratio of the distance
    traveled to find an object to the distance to the
    closest copy of the object
  • Objects are services, so distance isn't a one-time
    cost (see COMPASS)
  • (Smart) publishers put objects at chosen locations
    in the network
  • Bob Miller places the retreat schedule at a node in
    Berkeley
  • Wildly popular objects
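As a toy illustration of the stretch metric defined above (all distances hypothetical), stretch is just a ratio:

```python
# Stretch = (distance actually traveled to reach an object)
#         / (distance to the closest copy of that object).
# Hypothetical example: copies of an object sit at distances 5 and 40
# from a client; a poor lookup routes to the far copy over a path of
# total length 50, while a good one reaches the near copy in length 6.
def stretch(distance_traveled, distances_to_copies):
    return distance_traveled / min(distances_to_copies)

print(stretch(50, [5, 40]))  # 10.0: poor locality
print(stretch(6, [5, 40]))   # 1.2: near-optimal routing
```

A stretch of 1 would mean every lookup travels no farther than the nearest copy.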

5
Well-Placed Objects
6
Popular Objects
7
Two solutions
  • One directory
    • One participant stores a lot, but the average load is low
    • Stretch is high
  • Everyone has a directory
    • Low stretch!
    • Everyone stores lots of data
    • Changes are very expensive

8
Distribute directory
  • Each peer is responsible for part of the directory
  • Q: Who is responsible for my object?
  • A: Hash peers into the same namespace as objects.
  • Q: How do I get there?
  • A: Peer-to-peer networks route to names.
  • ⇒ Must efficiently self-organize into a network
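A minimal sketch of that hashing step, assuming SHA-1-derived IDs and successor-style assignment (both illustrative choices, not Tapestry's actual mechanism, which routes by ID prefix):

```python
import hashlib

def name_to_id(name, bits=16):
    # Hash any name (peer address or object key) into one shared ID space.
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def responsible_peer(object_key, peers):
    # The peer whose ID most closely follows the object's ID
    # holds that object's directory entry.
    obj_id = name_to_id(object_key)
    peer_ids = sorted(name_to_id(p) for p in peers)
    for pid in peer_ids:
        if pid >= obj_id:
            return pid
    return peer_ids[0]  # wrap around the ID ring
```

Because peers and objects share one namespace, any peer can compute who is responsible for an object without a central directory.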

9
Efficiency is
  • With respect to network
  • Few neighbors
  • Short routes (few hops, low stretch)
  • Easy routing (no routing tables!)
  • Dynamic
  • With respect to directory
  • Not too many entries

10
Outline
  • Low stretch dynamic peer-to-peer network
  • Tolerate failures in network
  • Adapting to network variation
  • Future work

11
Distributed Hash Tables
System           Neighbors   Motivating Structure   Hops
CAN, 2001        O(r)        grid                   O(r · n^(1/r))
Chord, 2001      O(log n)    hypercube              O(log n)
Pastry, 2001     O(log n)    hypercube              O(log n)
Tapestry, 2001   O(log n)    hypercube              O(log n)
  • These systems give
  • Guaranteed location
  • Join and leave algorithms
  • Load-balanced storage
  • No stretch guarantees

12
Low Stretch Approaches
System                   Stretch   Space            Balanced   Metric
Awerbuch & Peleg, 1991   polylog   polylog          no         general
PRR, 1997                O(1)      O(log n)         yes        special
Thorup & Zwick           O(k^2)    O(k · n^(1/k))   yes        general
RRVV, 2001               polylog   polylog          yes        general
  • Not dynamic

Tapestry is first dynamic low-stretch scheme
13
PRR/Tapestry
(Diagram: routing hierarchy by country, state, and city)
14
PRR/Tapestry
(Diagram: routing tree with levels 1-3)
Two object types, red and blue, so two trees
15
Balancing Load
(Diagram: load spread across nodes; example node ID 5123)
16
Building It
  • Every node is a root!
  • Prefix routing in base b ⇒ promoting at rate 1/b
  • b = 16 ⇒ match one hex digit, or four bits, at a time
  • Space: O(b · log_b n)
  • Every node at every level stores b parents

We want the links to be of geometrically
increasing length
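A minimal sketch of the prefix-routing step described above, assuming 4-digit hexadecimal IDs (the node ID 0x5123 echoes the earlier slide; the target ID is made up):

```python
def next_hop(current_id, target_id, base=16, digits=4):
    # Prefix routing: find the first digit where the current node's ID
    # diverges from the target's, then route to a neighbor that matches
    # one more digit of the target.
    def to_digits(x):
        ds = []
        for _ in range(digits):
            ds.append(x % base)
            x //= base
        return ds[::-1]  # most significant digit first

    cur, tgt = to_digits(current_id), to_digits(target_id)
    for level, (c, t) in enumerate(zip(cur, tgt)):
        if c != t:
            # (routing-table level, digit the next neighbor must match)
            return level, t
    return None  # already at the target

# 0x5123 routing toward 0x5670: prefix "5" matches, then digit 6 is fixed.
print(next_hop(0x5123, 0x5670))  # (1, 6)
```

Each hop fixes one more digit, so a route completes in at most log_b n hops; with b = 16 that is one hex digit per hop, matching the bullet above.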
17
Big Challenge: Joining Nodes
Theorem 1 [HKRZ02]: When peer A has finished
inserting, it knows about all relevant peers that
have finished insertion.
18
Results
  • Correctness: O(log n) insert/delete
  • Concurrent inserts in a lock-free fashion
  • Neighbor-search routine
  • Required to keep stretch low
  • All low-stretch schemes do something like this
  • Zhao, Huang, Stribling, Rhea, Joseph &
    Kubiatowicz (JSAC)
  • This works! Implemented the algorithms
  • Measured performance

19
Neighbor Search
  • In growth-restricted networks (with no additional
    space!)
  • Theorem 2 [HKRZ02]: Can find the nearest neighbor
    with high probability using O(log^2 n) messages
  • Theorem 3 [HKMR04]: Can find the nearest neighbor,
    and the number of messages is O(log n) with high
    probability

20
Algorithm Idea, in pictures
21
Outline
  • Low stretch dynamic peer-to-peer network
  • Tolerate failures in network
  • Adapting to network variation
  • Future work

22
Behind the Cloud Again
23
Dealing with faults
  • Multiple paths
  • Castro et al.
  • One failure along a path breaks that path
  • Wide path
  • Paths must be faulty at the same place to break
  • Exponential difference in the effect of width
  • Retrofit Tapestry to do the latter in slightly
    malicious networks

24
Effective even for small overhead
Theorem 4 [Hildrum & Kubiatowicz, DISC02]: In
growth-restricted spaces, can make the probability
of a failed route less than 1/n^c for width O(c · log n).
25
Wide path vs. multiple paths
26
Outline
  • Low stretch dynamic peer-to-peer network
  • Tolerate failures in network
  • Adapting to Network Variation
  • Future work

27
Digit size affects performance
28
Network not homogeneous
  • Previous schemes picked a digit size
  • How do we find a good one?
  • But what if there isn't one?

(Map: nodes in Nebraska, San Francisco, and Paris)
29
New Result
  • Pick digit size based on local measurements
  • Don't need to guess
  • Vary digit size depending on location
  • No, it's not obvious that this works, but it
    does!
  • [Hildrum, Krauthgamer & Kubiatowicz, SPAA04]
  • Dynamic, locally optimal low-stretch network
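The underlying tradeoff can be sketched numerically (the cost model and latency figure are hypothetical, not the SPAA04 algorithm): a larger base b means fewer hops, roughly log_b n, but more routing state, roughly b · log_b n entries.

```python
import math

def route_cost(n, b, hop_latency):
    # Expected hops ~ log_b(n); routing state ~ b * log_b(n) entries.
    hops = math.log(n, b)
    state = b * hops
    return hops * hop_latency, state

n = 2 ** 20  # network size (hypothetical)
for b in (2, 4, 16, 64):
    latency_cost, state = route_cost(n, b, hop_latency=50)  # ms, made up
    print(b, round(latency_cost), round(state))
```

Where per-hop latency is high (a Nebraska node), a large b pays off; where neighbors are cheap to reach, a small b keeps tables tiny, so a locally measured choice can beat any single global digit size.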

30
Conclusions and Future Work
  • Conclusion
  • Low-stretch object location is practical
  • System provably good [HKRZ02]
  • System built [ZHSJK]
  • Open Questions
  • Do we need a DOLR?
  • Object placement schemes? Workload?
  • Examples where low stretch, load balance, and low
    storage are not possible simultaneously
  • What is the tradeoff between degree, stretch, and
    load balance as a function of the graph?
  • Can we get the best possible? Trade off smoothly?

31
Tapestry People
  • Ling Huang
  • Anthony Joseph
  • John Kubiatowicz
  • Sean Rhea
  • Jeremy Stribling
  • Ben Zhao
  • and OceanStore group members