Tapestry: A Resilient Global-Scale Overlay for Service Deployment

1
Tapestry: A Resilient Global-Scale Overlay for Service Deployment
  • Ben Y. Zhao
  • IEEE Journal on Selected Areas in Communications, Vol. 22, No. 1, January 2004

2
Introduction to Tapestry
  • Tapestry is a peer-to-peer overlay network that provides
  • High performance
  • Scalability
  • Location-independent routing of messages
  • Using only localized resources

3
Objectives of this paper
  • Provide decentralized object location and routing (DOLR)
  • Message routing
  • Object replication
  • Fault tolerance
  • Object placement

4
Tapestry Routing Mesh
(Figure: routing mesh showing a node ID Nid and its outgoing links.)
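The routing mesh resolves the destination ID one digit per hop: each node keeps neighbors grouped by (prefix level, next digit) and forwards to a neighbor that shares one more leading digit with the destination. A minimal sketch, assuming hex-string node IDs; the function names and table layout are illustrative, not from the paper.

```python
# Sketch of Tapestry-style prefix routing: one digit resolved per hop.
# Hex-string IDs; the table layout and names are illustrative assumptions.

def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading digits the two IDs have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(local_id: str, dest_id: str, table: dict) -> str:
    """Pick a neighbor matching dest_id in one more digit than local_id does.

    `table` maps (level, digit) -> neighbor ID, where level is the number of
    already-matched digits and digit is the next digit of dest_id.
    """
    level = shared_prefix_len(local_id, dest_id)
    if level == len(dest_id):
        return local_id                          # already at the destination
    return table.get((level, dest_id[level]), local_id)

# Node 4227 routing toward 42AX shares 2 digits, so it needs a level-2
# neighbor whose third digit is "A".
print(next_hop("4227", "42AX", {(2, "A"): "42A9"}))  # -> 42A9
```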
5
Path of a Message
Aid or GUID OG (42AX)
GR
6
Tapestry Object Publish Example
(Figure: a publish message for an object's GUID travels toward the root GR, leaving location pointers along the way.)
7
Tapestry Route to Object
8
Message Processing
9
Tapestry Component Architecture
(Figure: layered node architecture; a node is identified by IP address, port, and Nid, and communicates over TCP/UDP.)
10
DOLR Network API
  • PublishObject (OG, Aid)
  • Best-effort; the publisher receives no confirmation.
  • UnpublishObject (OG, Aid)
  • Best-effort.
  • RouteToObject (OG, Aid)
  • Route a message to a location of the object with GUID OG.
  • RouteToNode (N, Aid, Exact)
  • Route a message to application Aid on node N; Exact specifies whether N must be matched exactly.
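The four DOLR calls above can be sketched as a minimal Python interface. The method names mirror the slide; the class name, pointer storage, and local-only stub behavior are assumptions for illustration, not the paper's implementation.

```python
class DOLRNode:
    """Hedged sketch of a node exposing the four DOLR API calls."""

    def __init__(self, node_id: str):
        self.node_id = node_id
        self.pointers = {}                   # object GUID -> set of app IDs

    def publish_object(self, o_guid: str, a_id: str) -> None:
        # Best-effort: the publisher receives no confirmation.
        self.pointers.setdefault(o_guid, set()).add(a_id)

    def unpublish_object(self, o_guid: str, a_id: str) -> None:
        # Best-effort removal of a previously published pointer.
        self.pointers.get(o_guid, set()).discard(a_id)

    def route_to_object(self, o_guid: str, a_id: str, msg: str) -> bool:
        # Stub: a real node forwards msg toward a replica; here we only
        # report whether a local location pointer for the GUID exists.
        return bool(self.pointers.get(o_guid))

    def route_to_node(self, n: str, a_id: str, exact: bool, msg: str) -> bool:
        # Stub: with exact=True the destination ID must match this node
        # exactly; the closest-match case is not modeled here.
        return (not exact) or n == self.node_id
```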

11
Tapestry Upcall Interface
  • Deliver(G, Aid, Msg)
  • Forward(G, Aid, Msg)
  • Route(G, Aid, Msg, NextHopNode)
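A sketch of an application object implementing the three upcalls: Deliver fires when this node is the destination, Forward fires on intermediate hops, and Route lets the application redirect the message to a chosen next hop. The class name and logging behavior are purely illustrative assumptions.

```python
class UpcallApp:
    """Illustrative application implementing the Tapestry upcall interface."""

    def __init__(self):
        self.log = []                        # record of upcalls received

    def deliver(self, g, a_id, msg):
        # Invoked when this node is the destination for GUID g.
        self.log.append(("deliver", g, msg))

    def forward(self, g, a_id, msg):
        # Invoked on intermediate hops; the app may inspect or consume msg.
        self.log.append(("forward", g, msg))

    def route(self, g, a_id, msg, next_hop_node):
        # Invoked after forward() to (re)direct msg toward next_hop_node.
        self.log.append(("route", g, msg, next_hop_node))
```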

12
Node Insertion
  1. Need-to-know nodes (those with a routing-table hole that N fills) are notified of N.
  2. N might become the new root for some objects, so their references must be moved.
  3. Construct a near-optimal routing table for N.
  4. Nodes near N are notified and may consider using N in their routing tables.
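Step 3 above (constructing N's routing table) can be sketched by filing each known neighbor at the level equal to its shared-prefix length with N. The per-slot redundancy limit and all names are assumed parameters for illustration, not from the slides.

```python
# Sketch: build a prefix routing table for a newly inserted node.
# Slot key (level, digit): level = digits shared with new_id, digit = the
# neighbor's first differing digit. `redundancy` caps entries per slot.

def build_routing_table(new_id: str, candidates: list, redundancy: int = 2) -> dict:
    table = {}                               # (level, digit) -> [neighbor IDs]
    for cand in candidates:
        level = 0
        while (level < len(new_id) and level < len(cand)
               and new_id[level] == cand[level]):
            level += 1
        if level == len(cand):
            continue                         # identical ID; nothing to file
        slot = table.setdefault((level, cand[level]), [])
        if len(slot) < redundancy:
            slot.append(cand)
    return table

table = build_routing_table("4227", ["42A9", "42AF", "4AC2", "9098"])
print(table[(2, "A")])                       # -> ['42A9', '42AF']
```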

13
Node Delete
  • Voluntary node delete
  • Notify the set D of nodes in N's backpointers.
  • Redirect republish traffic for objects rooted at N to its replacement nodes.
  • Involuntary node delete
  • Send beacons to detect outgoing-link and node failures.
  • Build redundancy into routing tables.
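The last two bullets can be combined into one sketch: each routing-table slot keeps backup neighbors, and a neighbor that has missed too many beacons is skipped at forwarding time. The threshold and data layout are illustrative assumptions.

```python
# Sketch of failover across redundant routing-table entries: a slot holds a
# primary plus backups; missed-beacon counts decide which neighbor is live.

MAX_MISSED = 3                               # beacons missed before failover

def choose_hop(slot_neighbors: list, missed_beacons: dict):
    """Return the first neighbor in the slot not presumed failed, else None."""
    for n in slot_neighbors:
        if missed_beacons.get(n, 0) < MAX_MISSED:
            return n
    return None                              # every backup presumed failed

slot = ["42A9", "42AF", "42A1"]              # primary first, then backups
missed = {"42A9": 3, "42AF": 1}              # primary has gone silent
print(choose_hop(slot, missed))              # -> 42AF
```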

14
Tapestry Implementation
15
Evaluation
  • We ran our wide-area experiments on PlanetLab, a network testbed consisting of roughly 100 machines at institutions in North America, Europe, Asia, and Australia.
  • For large-scale topologies, we performed experiments using the Simple OceanStore Simulator (SOSS).

19
400 Nodes, 62 PlanetLab Machines
(Figure: all-pairs round-trip routing latency in ms; Percentile(Array, K) returns the K-th percentile of the values in Array.)
20
(Figure: relative delay penalty (RDP) vs. the minimum distance between the source and the endpoint.)
21
Publish optimization: use the k backup nodes of the next hop on the publish path, plus the nearest l neighbors of the current hop (1,090 nodes).
(Figure: relative delay penalty (RDP) vs. the minimum distance between the source and the endpoint.)
22
Insert a single node 20 times
23
Insert a single node 20 times
24
Delete a single node 20 times
25
Maximum of 1,000 nodes