Towards A Common API for Structured Peer-to-Peer Overlays - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
Towards A Common API for Structured Peer-to-Peer
Overlays
  • Frank Dabek, Ben Y. Zhao, Peter Druschel, John
    Kubiatowicz, Ion Stoica
  • MIT, U. C. Berkeley, and Rice
  • Conducted under the IRIS project (NSF)

2
State of the Art
  • Lots and lots of peer to peer applications
  • Decentralized file systems, archival backup
  • Group communication / coordination
  • Routing layers for anonymity, attack resilience
  • Scalable content distribution
  • Built on scalable, self-organizing overlays
  • E.g. CAN, Chord, Pastry, Tapestry, Kademlia, etc.
  • Semantic differences
  • Store/get data, locate objects, multicast /
    anycast
  • How do these functional layers relate?
  • What is the smallest common denominator?

3
Some Abstractions
  • Distributed Hash Tables (DHT)
  • Simple store and retrieve of values with a key
  • Values can be of any type
  • Decentralized Object Location and Routing (DOLR)
  • Decentralized directory service for
    endpoints/objects
  • Route messages to nearest available endpoint
  • Multicast / Anycast (CAST)
  • Scalable group communication
  • Decentralized membership management

4
Tier 1 Interfaces
5
Structured P2P Overlays (layer diagram)
  • Tier 2: CFS, PAST, SplitStream, i3, OceanStore, Bayeux
  • Tier 1: CAST, DHT, DOLR
  • Tier 0: Key-based Routing
6
The Common Denominator
  • Key-based Routing layer (Tier 0)
  • Large sparse ID space N (160 bits: 0 … 2^160,
    represented in base b)
  • Nodes in overlay network have nodeIds ∈ N
  • Given k ∈ N, overlay deterministically maps k to
    its root node (a live node in the network)
  • Goal: Standardize API at this layer
  • Main routing call
  • route(key, msg, node) (see the sketch below)
  • Route message to node currently responsible for
    key
  • Supplementary calls
  • Flexible upcall interface for customized routing
  • Accessing and managing the ID-space
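
As a rough illustration (not part of the proposed API itself), the following Python sketch shows one way the Tier 0 route() call could behave; the KBRNode class, the to_key() helper, and the greedy numeric-distance metric are assumptions made only for this example:

import hashlib

ID_BITS = 160                                   # sparse 160-bit identifier space

def to_key(name):
    """Hash an application-level name into the 160-bit key space."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

class KBRNode:
    """Hypothetical overlay node exposing the Tier 0 route() call."""

    def __init__(self, node_id, neighbors=None):
        self.node_id = node_id
        self.neighbors = neighbors or []        # other KBRNode objects this node knows

    def distance(self, key):
        return abs(self.node_id - key)          # simplified distance in the ID space

    def route(self, key, msg, hint=None):
        """Forward msg toward the root of key; hint, if given, is tried as the first hop."""
        if hint is not None:
            hint.route(key, msg)
            return
        closest = min(self.neighbors + [self], key=lambda n: n.distance(key))
        if closest is self:
            self.deliver(key, msg)              # this node is the root: hand msg to the app
        else:
            closest.route(key, msg)

    def deliver(self, key, msg):
        print(f"node {self.node_id:x} delivers {msg!r} for key {key:x}")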

7
Flexible Routing via Upcalls
  • Deliver(key, msg)
  • Delivers a message to application at the
    destination
  • Forward(key, msg, nextHopNode)
  • Synchronous upcall with normal next hop node
  • Applications can override messages
  • Update(node, boolean joined)
  • Upcall invoked to inform application of a node
    joining or leaving the local node's neighborSet
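
A minimal Python sketch of the upcall interface an application could implement on top of KBR; the Application class name and the convention that forward() returns a (msg, nextHopNode) pair, with (None, None) meaning "stop forwarding", are assumptions for this illustration:

class Application:
    """Upcall interface implemented by an application layered over KBR (illustrative only)."""

    def deliver(self, key, msg):
        """Invoked at the root node for key: the message has reached its destination."""
        raise NotImplementedError

    def forward(self, key, msg, next_hop):
        """Synchronous upcall at each hop along the route.
        The application may rewrite msg, pick a different next hop,
        or return (None, None) to terminate routing at this node."""
        return msg, next_hop

    def update(self, node, joined):
        """Invoked when node joins (joined=True) or leaves (joined=False)
        the local node's neighborSet."""
        pass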

8
KBR API (managing ID space)
  • Expose local routing table
  • nextHopSet local_lookup (key, num, safe)
  • Query the ID space
  • nodehandle neighborSet (max_rank)
  • nodehandle replicaSet (key, num)
  • boolean range (node, rank, lkey, rkey)
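
The stub below (my own Python rendering, not reference code) simply pins down concrete signatures and intended semantics for these four calls:

class KBRIdSpace:
    """Illustrative signatures for the ID-space management calls."""

    def local_lookup(self, key, num, safe):
        """Return up to num next hops for key, taken from the local routing
        table only; safe=True requests hops chosen under a failure- and
        attack-resilient policy."""

    def neighbor_set(self, max_rank):
        """Return the node handles in the local node's neighbor set,
        up to rank max_rank."""

    def replica_set(self, key, num):
        """Return up to num node handles on which replicas of key would be stored."""

    def range(self, node, rank, lkey, rkey):
        """Return True if the range [lkey, rkey] of keys for which node is a
        rank-root could be determined, False otherwise."""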

9
Caching DHT Illustrated
10
Caching DHT Implementation
  • Interface
  • put (key, value)
  • value get (key)
  • Implementation (source S, client C, root R;
    sketched in code below)
  • Put: route(key, [PUT, value, S]); Forward upcall:
    store value; Deliver upcall: store value
  • Get: route(key, [GET, C]); Forward upcall: if
    cached, route(C, [value]), FIN; Deliver upcall:
    if found, route(C, [value])
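
A rough Python rendering of this put/get flow, assuming the KBR layer dispatches the forward and deliver upcalls to this object; the message tuples, the kbr handle, and the local store dictionary are invented for the example:

class CachingDHT:
    """Sketch of put/get built on route() plus the forward/deliver upcalls."""

    def __init__(self, kbr):
        self.kbr = kbr
        self.store = {}                          # local (possibly cached) key -> value

    # client-side calls ----------------------------------------------------
    def put(self, key, value, source):
        self.kbr.route(key, ("PUT", value, source))

    def get(self, key, client):
        self.kbr.route(key, ("GET", client))

    # upcalls invoked by the KBR layer --------------------------------------
    def forward(self, key, msg, next_hop):
        if msg[0] == "PUT":
            self.store[key] = msg[1]             # cache the value at every hop
        elif msg[0] == "GET" and key in self.store:
            self.kbr.route(msg[1], ("VALUE", self.store[key]))
            return None, None                    # answered from cache (FIN): stop forwarding
        return msg, next_hop

    def deliver(self, key, msg):
        if msg[0] == "PUT":
            self.store[key] = msg[1]             # root stores the value
        elif msg[0] == "GET" and key in self.store:
            self.kbr.route(msg[1], ("VALUE", self.store[key]))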

11
Ongoing Work
  • What's next
  • Better understanding of DOLR vs. CAST
  • Decentralized endpoint management
  • Policies in placement of indirection points
  • APIs and semantics for Tier 1 (DHT/DOLR/CAST)
  • KBR API implementation in current protocols
  • See paper for additional details
  • Implementation of Tier 1 interfaces on KBR
  • KBR API support on selected P2P systems

12
Backup Slides Follow
13
Our Goals
  • Protocol comparison
  • Identify basic commonalities between systems
  • Isolate and clarify interfaces by functionality
  • Towards a common API
  • Easily supportable by old and new protocols
  • Enable application portability between protocols
  • Enable common benchmarks
  • Provide a framework for reusable components

14
Key-based Routing API
  • Invoking routing functionality
  • Route (key, msg, node)
  • Accessing routing layer
  • nextHopSet local_lookup(key, num, safe)
  • nodehandle neighborSet(max_rank)
  • nodehandle replicaSet(key, num)
  • boolean range(node, rank, lkey, rkey)
  • Flexible upcalls
  • Deliver (key, msg)
  • Forward (key, msg, nextHopNode)
  • Update (node, boolean joined)

15
Observations
  • Compare and contrast
  • Issues: locality, naming, caching, replication
  • Common general algorithmic approach
  • Contrast: instantiation, policy, …
  • Revising abstraction definitions
  • Pure abstraction vs. instantiated prototype
  • E.g. DHT abstraction vs. DHash
  • Abstractions colored by initial application
  • E.g. Object location → DOLR → Endpoint Location
    and Routing
  • Ongoing understanding of Tier 1 interfaces

16
Flexible API for Routing
  • Goal
  • Consistent API for leveraging routing mesh
  • Flexible enough to build higher abstractions
  • Openness promotes new abstractions
  • Allow competitive selection to determine right
    abstractions
  • Three main components
  • Invoking routing functionality
  • Accessing namespace mapping properties
  • Open, flexible upcall interface

17
DOLR on Routing API
18
DOLR Implementation
  • Endpoint E, client C, key K
  • Publish: route(objectId, [publish, K, E]);
    Forward upcall:
  • store (K, E) in local storage
  • sendToObj: route(nodeId, [n, msg]); Forward
    upcall:
  • e = Lookup(K) in local storage
  • For (ni ∈ e), route(ni, msg)
  • If (n > |e|), route(nodeId, [n - |e|, msg])
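
A Python sketch of the publish/sendToObj logic above, using the same forward-upcall convention as the DHT example; the pointer map and the message tuples are invented for illustration:

class DOLR:
    """Sketch of publish/sendToObj on top of KBR routing."""

    def __init__(self, kbr):
        self.kbr = kbr
        self.pointers = {}                       # key K -> list of published endpoints E

    def publish(self, key, endpoint):
        self.kbr.route(key, ("publish", key, endpoint))

    def send_to_obj(self, key, n, msg):
        """Deliver msg to up to n published endpoints for key."""
        self.kbr.route(key, ("sendToObj", key, n, msg))

    # forward upcall invoked at each hop toward the key's root ---------------
    def forward(self, key, msg, next_hop):
        if msg[0] == "publish":
            _, k, endpoint = msg
            self.pointers.setdefault(k, []).append(endpoint)   # leave a pointer here
        elif msg[0] == "sendToObj":
            _, k, n, payload = msg
            found = self.pointers.get(k, [])[:n]
            for e in found:                      # serve as many of the n copies as we can
                self.kbr.route(e, payload)
            if n == len(found):
                return None, None                # all copies sent: stop routing
            msg = ("sendToObj", k, n - len(found), payload)    # continue toward the root
        return msg, next_hop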

19
Storage API Overview
  • linsert(key, value)
  • value lget(key)

20
Storage API
  • linsert(key, value): store the tuple <key, value>
    into local storage. If a tuple with key already
    exists, it is replaced. The insertion is atomic
    with respect to failures of the local node.
  • value lget(key): retrieves the value associated
    with key from local storage. Returns null if no
    tuple with key exists.
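
A minimal Python sketch of these two calls; the dictionary-backed store is an assumption, and a real implementation would have to make linsert atomic with respect to local-node failures (e.g. via a write-ahead log), which is only noted here:

class LocalStore:
    """Illustrative local storage behind linsert/lget."""

    def __init__(self):
        self._tuples = {}

    def linsert(self, key, value):
        self._tuples[key] = value                # replaces any existing tuple with this key

    def lget(self, key):
        return self._tuples.get(key)             # None if no tuple with this key exists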

21
Basic DHT API Overview
  • insert(key, value, lease)
  • value get(key)
  • release(key)
  • Upcalls
  • insertData(key, value, lease)

22
Basic DHT API
  • insert(key, value, lease): inserts the tuple
    <key, value> into the DHT. The tuple is
    guaranteed to be stored in the DHT only for the
    lease time. value also includes the type of
    operation to be performed on insertion. Default
    operation types include:
  • REPLACE: replace the value associated with the
    same key
  • APPEND: append value to the existing key
  • UPCALL: generate an upcall to the application
    before inserting
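
The sketch below (a Python assumption, not reference code) shows how a root node might handle insert with a lease and the three default operation types; lease expiry is checked lazily on get, and release from the later slide is included for completeness:

import time

REPLACE, APPEND, UPCALL = "REPLACE", "APPEND", "UPCALL"

class BasicDHTRoot:
    """Illustrative root-node behaviour for insert/get/release with leases."""

    def __init__(self, insert_upcall=None):
        self.table = {}                          # key -> (value, expiry time)
        self.insert_upcall = insert_upcall       # application's insertData upcall, if any

    def insert(self, key, value, lease, op=REPLACE):
        if op == UPCALL and self.insert_upcall:
            self.insert_upcall(key, value, lease)          # notify the application first
        if op == APPEND and self.get(key) is not None:
            value = self.get(key) + value                  # append to the existing value
        self.table[key] = (value, time.time() + lease)     # (re)store with a fresh lease

    def get(self, key):
        entry = self.table.get(key)
        if entry is None or entry[1] <= time.time():
            return None                          # missing, or the lease has expired
        return entry[0]

    def release(self, key):
        self.table.pop(key, None)                # tuples with key may no longer exist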

23
Basic DHT API
  • value get(key): retrieves the value associated
    with key. Returns null if no tuple with key
    exists in the DHT.

24
Basic DHT API
  • <key, value> scan(searchKey): let n denote the
    node responsible for searchKey. Return the tuple
    stored at n whose key follows searchKey. If no
    such key exists, return the first key that
    follows searchKey and which is not mapped on n.
  • Notes
  • If there is a tuple whose key is equal to
    searchKey, then that tuple is returned
    (degenerates to get() operation)
  • This function assumes a total order amongst the
    keys.
  • This function can be used to implement a global
    scan of the DHT (see the sketch below)
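
A small Python sketch of the local side of scan(), assuming keys with a total order kept sorted on each node; the next_node_first_key field is an assumption standing in for the key range owned by the following node:

import bisect

class ScannableNode:
    """Illustrative scan() over the tuples one node is responsible for."""

    def __init__(self, local_tuples, next_node_first_key):
        self.tuples = dict(local_tuples)
        self.keys = sorted(self.tuples)          # locally stored keys, in key order
        self.next_key = next_node_first_key      # first key not mapped on this node

    def scan(self, search_key):
        i = bisect.bisect_left(self.keys, search_key)
        if i < len(self.keys):
            k = self.keys[i]                     # equal key degenerates to a get()
            return k, self.tuples[k]
        return self.next_key, None               # caller continues at the next node

A global scan would then repeatedly call scan() with the returned key, hopping to the node responsible for that key whenever the local tuples are exhausted.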

25
Basic DHT API
  • Release(key): releases any tuples with key from
    the DHT. After this operation completes, tuples
    with key are no longer guaranteed to exist in the
    DHT.

26
Basic DHT API Open questions
  • Semantics?
  • Verification/Access control/multiple DHTs?
  • Caching?
  • Replication?
  • Should we have leases? It makes us dependent on
    secure time sync.

27
Replicating DHT API
  • Insert(key, value, numReplicas): adds a
    numReplicas argument to insert. Ensures
    resilience of the tuple to up to numReplicas-1
    simultaneous node failures.
  • Open questions
  • consistency
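
A brief Python sketch of how insert with numReplicas could use the KBR replicaSet() call; the message format and the assumption that replicaSet() excludes the root itself are mine:

class ReplicatingDHT:
    """Illustrative insert that pushes copies to numReplicas nodes."""

    def __init__(self, kbr, local_store):
        self.kbr = kbr
        self.local = local_store

    def insert(self, key, value, num_replicas):
        self.local.linsert(key, value)                        # the root keeps one copy
        for node in self.kbr.replica_set(key, num_replicas - 1):
            self.kbr.route(node.node_id, ("REPLICA", key, value))
        # the tuple now survives up to numReplicas-1 simultaneous node failures,
        # assuming the replicas were stored before any of them fail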

28
Caching DHT API
  • Same as basic DHT API. Implementation uses
    dynamic caching to balance query load.

29
Resilient DHT API
  • Same as replicating DHT API. Implementation uses
    dynamic caching to balance query load.

30
Publish API Overview
  • Publish(key, object)
  • object Lookup(key)
  • Remove(key, object)

31
Publish API
  • Publish(key, object): ensures that the locally
    stored object can be located using the key.
    Multiple instances of the object may be published
    under the same key from different locations.
  • object Lookup(key): locates the nearest
    instance of the object associated with key.
    Returns null if no such object exists.

32
Publish API
  • Remove(key, object): after this operation
    completes, the local instance of object can no
    longer be located using key.
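
A short Python sketch tying the Publish API to the DOLR layer sketched earlier; the PublishLayer class, its fields, and the reply handling (omitted) are assumptions:

class PublishLayer:
    """Illustrative Publish/Lookup/Remove over a DOLR-style layer."""

    def __init__(self, dolr, local_node_id):
        self.dolr = dolr
        self.node_id = local_node_id
        self.objects = {}                        # key -> locally stored object instance

    def publish(self, key, obj):
        self.objects[key] = obj
        self.dolr.publish(key, self.node_id)     # advertise this location for key

    def lookup_request(self, key, reply_to):
        # Lookup routes a request to the nearest published instance; the holder
        # answers with its local object (the reply path is omitted here).
        self.dolr.send_to_obj(key, 1, ("LOOKUP", key, reply_to))

    def remove(self, key, obj):
        if self.objects.get(key) is obj:
            del self.objects[key]                # local instance no longer served
        # a full implementation would also remove the (key, endpoint) pointer state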