A Case For End System Multicast - PowerPoint PPT Presentation

1
A Case For End System Multicast
  • Yang-hua Chu, Sanjay Rao and Hui Zhang
  • Carnegie Mellon University
  • Slides modified and presented by Jonathan
    Shapiro

2
Talk Outline
  • Definition, potential benefits and problems
  • Look at Narada approach specifically
  • Performance in simulation
  • Performance in network experiment
  • Related work
  • Discussion

3
IP Multicast
[Figure: IP multicast tree spanning CMU, Berkeley, Stanford, and Gatech]
  • No duplicate packets
  • Highly efficient bandwidth usage
  • Key architectural decision: add support for
    multicast in the IP layer

4
Key Concerns with IP Multicast
  • Scalability with number of groups
    • Routers maintain per-group state
    • Analogous to per-flow state for QoS guarantees
    • Aggregation of multicast addresses is complicated
  • Supporting higher-level functionality is difficult
    • IP Multicast: a best-effort multi-point delivery
      service
    • End systems responsible for handling higher-level
      functionality
    • Reliability and congestion control for IP
      Multicast are complicated
  • Inter-domain routing is hard
    • No management of the flat address space
  • Deployment is difficult and slow
    • ISPs reluctant to turn on IP Multicast


5
End System Multicast
[Figure: unicast paths among CMU, Stanford (Stan1, Stan2),
Berkeley (Berk1, Berk2), and Gatech, contrasted with an
overlay tree built over the same end systems]
6
Potential Benefits
  • Scalability (number of sessions in the network)
    • Routers do not maintain per-group state
    • End systems do, but they participate in very few
      groups
  • Easier to deploy
  • Potentially simplifies support for higher-level
    functionality
    • Leverage computation and storage of end systems,
      for example for buffering packets, transcoding,
      ACK aggregation
    • Leverage solutions for unicast congestion control
      and reliability


7
Performance Concerns
8
What is an efficient overlay tree?
  • Ideally:
    • The delay between the source and receivers is
      small
    • The number of redundant packets on any physical
      link is low
  • Heuristic we use:
    • Every member in the tree has a small degree
    • Degree chosen to reflect bandwidth of connection
      to the Internet

[Figure: three overlay trees over CMU, Stan1, Stan2, Berk1,
Berk2, and Gatech: an efficient overlay, a high-degree
(unicast-like) star, and a high-latency chain]
9
Why is self-organization hard?
  • Dynamic changes in group membership
    • Members join and leave dynamically
    • Members may die
  • Limited knowledge of network conditions
    • Members do not know delay to each other when they
      join
    • Members probe each other to learn network-related
      information
    • Overlay must self-improve as more information
      becomes available
  • Dynamic changes in network conditions
    • Delay between members may vary over time due to
      congestion

10
Narada Design (1)
Step 0:
  • Maintain a complete overlay graph of all group
    members
    • Links correspond to unicast paths
    • Link costs maintained by polling

Step 1:
  • Mesh: a subset of the complete graph that may have
    cycles and includes all group members
    • Members have low degrees
    • Shortest-path delay between any pair of members
      along the mesh is small
11
Narada Design (2)
Step 2:
  • Source-rooted shortest-delay spanning trees of the
    mesh
    • Constructed using well-known routing algorithms
    • Members have low degrees
    • Small delay from source to receivers
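The shortest-delay trees of Step 2 can be sketched with Dijkstra's algorithm run over the mesh. A minimal sketch in Python, using a hypothetical mesh with made-up unicast delays (the member names follow the slides; the delay values are illustrative only):

```python
import heapq

def shortest_delay_tree(mesh, source):
    """Source-rooted shortest-delay spanning tree of the mesh (Dijkstra).

    mesh: dict member -> {neighbor: delay in ms}; assumed connected.
    Returns: dict mapping each reachable member to its tree parent.
    """
    dist = {source: 0.0}
    parent = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in mesh[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent

# Hypothetical mesh with made-up unicast delays (ms)
mesh = {
    "CMU":   {"Stan1": 30, "Berk1": 35},
    "Stan1": {"CMU": 30, "Stan2": 2, "Berk1": 10},
    "Stan2": {"Stan1": 2},
    "Berk1": {"CMU": 35, "Stan1": 10, "Berk2": 2},
    "Berk2": {"Berk1": 2},
}
tree = shortest_delay_tree(mesh, "CMU")
# e.g. Stan2's parent in the CMU-rooted tree is Stan1
```

In the real protocol each member computes its routes distributedly via distance-vector routing rather than with a centralized Dijkstra run; the sketch only illustrates the shape of the resulting tree.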
12
Narada Components
  • Mesh management
    • Ensures the mesh remains connected in the face of
      membership changes
  • Mesh optimization
    • Distributed heuristics for ensuring that
      shortest-path delay between members along the mesh
      is small
  • Spanning tree construction
    • Routing algorithms for constructing data-delivery
      trees
    • Distance-vector routing and reverse-path
      forwarding
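The reverse-path forwarding rule used for tree construction fits in a few lines: a member accepts a data packet from a given source only if it arrived from the neighbor that is the member's next hop toward that source. A minimal sketch, with a hypothetical routing table (the member names follow the slides; the table contents are invented for illustration):

```python
def should_forward(next_hop_toward, source, arrived_from):
    """Reverse-path forwarding check: accept a data packet from `source`
    only if it arrived from the neighbor that is this member's next hop
    toward `source` in its (distance-vector) routing table."""
    return next_hop_toward.get(source) == arrived_from

# Hypothetical routing table at Berk2: member -> next hop toward it
berk2_next_hop = {"CMU": "Berk1", "Stan1": "Berk1", "Stan2": "Berk1"}

should_forward(berk2_next_hop, "CMU", "Berk1")  # True: on the tree
should_forward(berk2_next_hop, "CMU", "Stan1")  # False: off-tree copy
```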

13
Optimizing Mesh Quality
  • Members periodically probe other members at
    random
    • New link added if:
      utility gain of adding link > Add Threshold
  • Members periodically monitor existing links
    • Existing link dropped if:
      cost of dropping link < Drop Threshold

[Figure: a poor overlay topology over CMU, Stan1, Stan2,
Berk1, Gatech1, and Gatech2]
14
The terms defined
  • Utility gain of adding a link is based on:
    • The number of members to which routing delay
      improves
    • How significant the improvement in delay to each
      member is
  • Cost of dropping a link is based on:
    • The number of members to which routing delay
      increases, for either neighbor
  • Add/Drop thresholds are functions of:
    • A member's estimation of group size
    • Current and maximum degree of the member in the
      mesh
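The utility-gain definition above can be sketched directly: sum, over the members whose routing delay improves, the fraction of delay saved to each. A minimal sketch under that reading, with invented before/after delay values (the weighting shown is one plausible way to combine "how many members improve" with "how significant each improvement is"):

```python
def utility_gain(current_delay, new_delay):
    """Utility of adding a candidate link, per the slide's definition:
    members whose routing delay improves, weighted by how significant
    each improvement is (here: fractional reduction in delay).

    current_delay / new_delay: dict member -> routing delay (ms) along
    the mesh before / after the candidate link is added.
    """
    gain = 0.0
    for member, cur in current_delay.items():
        new = new_delay[member]
        if new < cur:
            gain += (cur - new) / cur  # fraction of delay saved
    return gain

# Hypothetical delays (ms) before and after adding a link
before = {"CMU": 40.0, "Stan1": 25.0, "Gatech1": 50.0}
after  = {"CMU": 20.0, "Stan1": 25.0, "Gatech1": 25.0}
g = utility_gain(before, after)  # 0.5 + 0.5 = 1.0
```

The link is then added only if this gain exceeds the Add Threshold, which in turn depends on the member's group-size estimate and its current and maximum mesh degree.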

15
Desirable properties of heuristics
  • Stability: a dropped link will not be immediately
    re-added
  • Partition avoidance: a partition of the mesh is
    unlikely to be caused as a result of any single
    link being dropped

[Figure: two probe examples over CMU, Stan1, Stan2, Berk1,
Gatech1, and Gatech2. In the first, delay improves to Stan1
and CMU, but only marginally: do not add the link. In the
second, delay improves to CMU and Gatech1 significantly:
add the link.]
16
[Figure: the Berk1-Gatech2 link is used by Berk1 only to
reach Gatech2 and vice versa: drop it. The result is an
improved mesh.]
17
Narada Evaluation
  • Simulation experiments
  • Evaluation of an implementation on the Internet

18
Performance Metrics
  • Delay between members using Narada
  • Stress, defined as the number of identical copies
    of a packet that traverse a physical link


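The stress metric defined above can be computed by mapping each overlay edge onto the physical links its unicast path traverses and counting copies per physical link. A minimal sketch with a hypothetical two-receiver topology (link names and paths are invented for illustration):

```python
from collections import Counter

def link_stress(overlay_edges, unicast_path):
    """Stress of each physical link: the number of identical copies of
    a packet that traverse it, given the overlay edges and a function
    mapping an overlay edge to its underlying physical path."""
    stress = Counter()
    for edge in overlay_edges:
        for phys_link in unicast_path(edge):
            stress[phys_link] += 1
    return stress

# Hypothetical physical paths: both Berkeley hosts are reached over
# the same CMU-Berkeley backbone link.
paths = {
    ("CMU", "Berk1"): ["CMU-Berkeley", "Berkeley-Berk1"],
    ("CMU", "Berk2"): ["CMU-Berkeley", "Berkeley-Berk2"],
}
stress = link_stress(list(paths), lambda e: paths[e])
# The shared backbone link carries 2 copies under this unicast-style
# overlay; native IP multicast would carry exactly 1.
```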
19
Factors affecting performance
  • Topology model
    • Waxman variant
    • Mapnet: connectivity modeled after several ISP
      backbones
    • ASMap: based on inter-domain Internet
      connectivity
  • Topology size
    • Between 64 and 1024 routers
  • Group size
    • Between 16 and 256
  • Fanout range
    • Number of neighbors each member tries to maintain
      in the mesh

20
Delay in typical run

Waxman topology: 1024 routers, 3145 links. Group size:
128. Fanout range: <3-6> for all members.
21
Stress in typical run
[Figure: stress distributions for naive unicast, Narada,
and native multicast. Narada achieves a 14-fold reduction
in worst-case stress relative to naive unicast.]
22
Overhead
  • Two sources:
    • Pairwise exchange of routing and control
      information
    • Polling for mesh maintenance
  • Claim: ratio of non-data to data traffic grows
    linearly with group size
  • Narada is targeted at small groups

23
Related Work
  • Yoid (Paul Francis, ACIRI)
    • More emphasis on architectural aspects, less on
      performance
    • Uses a shared tree among participating members
      • More susceptible to a central point of failure
      • Distributed heuristics for managing and
        optimizing a tree are more complicated, as
        cycles must be avoided
  • Scattercast (Chawathe et al., UC Berkeley)
    • Emphasis on infrastructural support and
      proxy-based multicast
      • To us, an end system includes the notion of
        proxies
    • Also uses a mesh, but differs in protocol
      details

24
Conclusions
  • Proposed in 1989, IP Multicast is not yet widely
    deployed
    • Per-group state, control-state complexity, and
      scaling concerns
    • Difficult to support higher-layer functionality
    • Difficult to deploy, and to get ISPs to turn on
      IP Multicast
  • Is IP the right layer for supporting multicast
    functionality?
  • For small groups, an end-system overlay approach:
    • is feasible
    • has a low performance penalty compared to IP
      Multicast
    • has the potential to simplify support for higher-
      layer functionality
    • allows for application-specific customizations

25
Open Questions
  • Theoretical bounds on how close an ESM tree can
    come to IP Multicast performance
  • Alternate approach: work with the complete graph
    but modify the multicast routing protocol
  • Leveraging unicast reliability and congestion
    control
  • Performance improvements: reduce polling overhead

26
Internet Evaluation
  • 13 hosts, all join the group at about the same
    time
  • No further change in group membership
  • Each member tries to maintain 2-4 neighbors in
    the mesh
  • Host at CMU designated source

[Figure: measured overlay among the 13 hosts (CMU1, CMU2,
UWisc, UMass, UIUC1, UIUC2, UDel, Berkeley, Virginia1,
Virginia2, UKY, UCSB, GATech), annotated with per-link
delays in ms]
27
Narada Delay vs. Unicast Delay
[Figure: scatter plot of Narada delay (ms) against unicast
delay (ms), with 1x and 2x unicast-delay reference lines.
Internet routing can be sub-optimal.]