1
Building Peer-to-Peer Systems with Chord, a Distributed Lookup Service
  • Robert Morris
  • F. Dabek, E. Brunskill, M. F. Kaashoek,
  • D. Karger, I. Stoica, H. Balakrishnan
  • MIT / LCS

2
Goal: Better Peer-to-Peer Storage
  • Lookup is the key problem
  • Lookup is not easy
    • Gnutella scales badly
    • Freenet is imprecise
  • Chord lookup provides
    • Good naming semantics and efficiency
    • Elegant base for layered features

3
Lookup
[Figure: ring of nodes N1..N6. An author calls Insert(name, document); a consumer calls Fetch(name) and must find which node holds the document.]
4
Chord Architecture
  • Interface (sketched below)
    • lookup(DocumentID) → NodeID, IP-Address
  • Chord consists of
    • Consistent hashing
    • Small routing tables: log(n) entries
    • Fast join/leave protocol
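A hypothetical Python rendering of that interface (names and types are illustrative, not the actual implementation's):

    from typing import NamedTuple

    class Node(NamedTuple):
        node_id: int     # position on the circular ID space
        ip_address: str  # where to contact that node

    def lookup(document_id: int) -> Node:
        """Map a document ID to the node responsible for it."""
        raise NotImplementedError  # resolution is sketched on later slides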

5
Consistent Hashing
[Figure: circular 7-bit ID space (0 at top) with nodes N32, N90, N105 and documents D20, D80, D120.]
Example: node 90 is the successor of document 80.
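A minimal sketch of the successor rule in Python, using the node IDs from the figure (the linear scan is for illustration; Chord itself finds successors through the routing tables on the next slides):

    BITS = 7
    RING = 2 ** BITS  # 128 positions on the circular ID space

    def successor(doc_id, node_ids):
        """First node whose ID is at or after doc_id, wrapping at zero."""
        for n in sorted(node_ids):
            if n >= doc_id % RING:
                return n
        return min(node_ids)  # wrapped past zero

    nodes = [32, 90, 105]               # N32, N90, N105
    assert successor(80, nodes) == 90   # D80 -> N90, as in the example
    assert successor(120, nodes) == 32  # D120 wraps around to N32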
6
Chord Uses log(N) Fingers
[Figure: circular 7-bit ID space; N80's fingers reach 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring.]
N80 knows of only seven other nodes.
7
Chord Finger Table
[Figure: ring with nodes N32, N40, N52, N60, N70, N79, N80, N85, N102, N113.]
N32's Finger Table
  33..33   N40
  34..35   N40
  36..39   N40
  40..47   N40
  48..63   N52
  64..95   N70
  96..31   N102
Node n's i-th entry: first node ≥ n + 2^(i-1) (mod 2^7).
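A small sketch of that rule (7-bit ring as in the figure; each interval runs from one start up to the next):

    BITS = 7
    RING = 2 ** BITS

    def finger_starts(n):
        """Start of node n's i-th finger interval, for i = 1..7."""
        return [(n + 2 ** (i - 1)) % RING for i in range(1, BITS + 1)]

    print(finger_starts(32))  # [33, 34, 36, 40, 48, 64, 96], matching the table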
8
Chord Lookup
[Figure: ring with nodes N32, N40, N52, N60, N70, N79, N80, N85, N102, N113.]
N32's Finger Table
  33..33   N40
  34..35   N40
  36..39   N40
  40..47   N40
  48..63   N52
  64..95   N70
  96..31   N102
N70's Finger Table
  71..71   N79
  72..73   N79
  74..77   N79
  78..85   N80
  86..101  N102
  102..5   N102
  6..69    N32
N80's Finger Table
  81..81   N85
  82..83   N85
  84..87   N85
  88..95   N102
  96..111  N102
  112..15  N113
  16..79   N32
Node 32, lookup(82): 32 → 70 → 80 → 85.
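One way to read that walk in code: each node forwards the query to the node listed for the finger interval containing the key. A sketch with the three tables above hard-coded (illustrative only):

    def in_interval(k, lo, hi):
        """True if k lies in the circular interval lo..hi."""
        return lo <= k <= hi if lo <= hi else (k >= lo or k <= hi)

    FINGERS = {
        32: [(33, 33, 40), (34, 35, 40), (36, 39, 40), (40, 47, 40),
             (48, 63, 52), (64, 95, 70), (96, 31, 102)],
        70: [(71, 71, 79), (72, 73, 79), (74, 77, 79), (78, 85, 80),
             (86, 101, 102), (102, 5, 102), (6, 69, 32)],
        80: [(81, 81, 85), (82, 83, 85), (84, 87, 85), (88, 95, 102),
             (96, 111, 102), (112, 15, 113), (16, 79, 32)],
    }

    def next_hop(node, key):
        for lo, hi, succ in FINGERS[node]:
            if in_interval(key, lo, hi):
                return succ

    path, node = [32], 32
    while node in FINGERS:          # stops at N85, which has no table here
        node = next_hop(node, 82)
        path.append(node)
    print(path)  # [32, 70, 80, 85]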
9
New Node Join Procedure
N20's Finger Table (successors not yet known)
  21..21   ?
  22..23   ?
  24..27   ?
  28..35   ?
  36..51   ?
  52..83   ?
  84..19   ?
[Figure: N20 joins the ring of N32, N40, N52, N60, N70, N80, N102, N113.]
10
New Node Join Procedure (2)
N20's Finger Table
  21..21   N32
  22..23   N32
  24..27   N32
  28..35   N32
  36..51   N40
  52..83   N52
  84..19   N102
[Figure: same ring as before.]
Node 20 asks any node for the successor of 21, 22, …, 52, 84.
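A sketch of that step, assuming the ring in the figure (the direct successor scan stands in for "ask any node"):

    RING = 128
    ring_nodes = [32, 40, 52, 60, 70, 80, 102, 113]

    def successor(k):
        for n in sorted(ring_nodes):
            if n >= k % RING:
                return n
        return min(ring_nodes)

    n = 20
    starts = [(n + 2 ** i) % RING for i in range(7)]  # 21, 22, 24, 28, 36, 52, 84
    fingers = {s: successor(s) for s in starts}
    print(fingers)  # {21: 32, 22: 32, 24: 32, 28: 32, 36: 40, 52: 52, 84: 102}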
11
New Node Join Procedure (3)
N20's Finger Table
  21..21   N32
  22..23   N32
  24..27   N32
  28..35   N32
  36..51   N40
  52..83   N52
  84..19   N102
[Figure: documents D114..20, previously held by N32, now belong to N20.]
Node 20 moves documents from node 32.
12
Chord Properties
  • Log(n) lookup messages and table space.
  • Well-defined location for each ID.
  • No search required.
  • Natural load balance.
  • No name structure imposed.
  • Minimal join/leave disruption.
  • Does not store documents.

13
Building Systems with Chord
[Figure: layered architecture. A client app (e.g. a browser) issues get(key) / put(k, v) to a key/value storage layer, which provides data and update authentication, fault tolerance, and load balance; the key/value layer calls lookup(id) on Chord. The client and each server run their own key/value and Chord layers.]
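A minimal sketch (hypothetical names) of that layering: put and get live in the key/value layer and use Chord's lookup(id) only to locate the responsible node:

    NODES = [32, 90, 105]           # example ring from the earlier slides
    RING = 128
    store = {n: {} for n in NODES}  # stand-in for per-node storage

    def lookup(key_id):
        """Chord's job: the successor of key_id on the ring."""
        for n in sorted(NODES):
            if n >= key_id % RING:
                return n
        return min(NODES)

    def put(key_id, value):
        store[lookup(key_id)][key_id] = value

    def get(key_id):
        return store[lookup(key_id)].get(key_id)

    put(80, b"document")
    assert get(80) == b"document"   # lives at N90, the successor of 80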
14
Naming and Authentication
  • Name could be hash of file content
    • Easy for client to verify
    • But update requires new file name
  • Name could be a public key
    • Document contains digital signature
    • Allows verified updates with same name (the content-hash scheme is sketched below)
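A sketch of the content-hash scheme (SHA-1 is assumed here for illustration):

    import hashlib

    def content_name(doc: bytes) -> str:
        """Name a document by the hash of its content."""
        return hashlib.sha1(doc).hexdigest()

    doc = b"hello, chord"
    name = content_name(doc)
    fetched = doc                         # pretend this came back from a node
    assert content_name(fetched) == name  # easy for the client to verify
    # Any update changes the content, hence the name -- the drawback above.
    # The public-key scheme instead checks a signature carried inside the
    # document, so the name (the public key) survives updates.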

15
Naming and Fault Tolerance
[Figure: two placements of replicas 1-3 on the ring, one per rule:]
ID_i = hash(name + i), or
ID_i = successor_i(hash(name)), the i-th successor of hash(name)
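A sketch of the first rule, ID_i = hash(name + i), truncated to the 7-bit example ring (names and helper are illustrative):

    import hashlib

    def replica_ids(name: str, replicas: int, bits: int = 7) -> list:
        ids = []
        for i in range(replicas):
            digest = hashlib.sha1(f"{name}{i}".encode()).digest()
            ids.append(int.from_bytes(digest, "big") % 2 ** bits)
        return ids

    print(replica_ids("mydoc", 3))  # three independent IDs for one document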
16
Naming and Load Balance
[Figure: a file's blocks B1, B2, B3 and its inode. The inode, stored under ID_inode = hash(name), lists ID_B1, ID_B2, ID_B3, where ID_Bi = hash(Bi); each block is stored at its own ID.]
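A sketch of that layout (file name and helper are illustrative):

    import hashlib

    def block_id(data: bytes) -> str:
        return hashlib.sha1(data).hexdigest()

    blocks = [b"block-1", b"block-2", b"block-3"]         # B1, B2, B3
    inode = [block_id(b) for b in blocks]                 # ID_B1..ID_B3
    inode_id = hashlib.sha1(b"my-file-name").hexdigest()  # ID_inode = hash(name)
    # The inode is stored under inode_id; each block is stored under its own
    # ID, so a large file's blocks spread across many nodes.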
17
Naming and Caching
[Figure: client 1 and client 2 both look up D30 @ N32; the mapping can be cached.]
18
Open Issues
  • Network proximity
  • Malicious data insertion
  • Malicious Chord table information
  • Anonymity
  • Keyword search and indexing

19
Related Work
  • CAN (Ratnasamy, Francis, Handley, Karp, Shenker)
  • Pastry (Rowstron and Druschel)
  • Tapestry (Zhao, Kubiatowicz, Joseph)

20
Chord Status
  • Working Chord implementation
  • SFSRO file system layered on top
  • Prototype deployed at 12 sites around the world
  • Understand design tradeoffs

21
Chord Summary
  • Chord provides distributed lookup
  • Efficient, low-impact join and leave
  • Flat key space allows flexible extensions
  • Good foundation for peer-to-peer systems
  • http://www.pdos.lcs.mit.edu/chord