Receiver-driven Layered Multicast (RLM) - PowerPoint PPT Presentation

1
Receiver-driven Layered Multicast(RLM)
  • Steven McCanne
  • Van Jacobson
  • Martin Vetterli

Presented by Krishna Bhimavarapu Thursday, 24
January 2002
2
Introduction
  • Source-based, rate-adaptive multimedia
    applications adjust their transmission rate to
    match available network capacity
  • This does not work well in a heterogeneous
    multicast environment like the Internet
  • Reasons
  • - no single target rate
  • - low-capacity regions of the network suffer
    congestion
  • - high-capacity regions are underutilized
  • Such problems are not just theoretical. They
    impact the daily use of Internet
    remote-conferencing
  • This paper addresses questions related to
    layered multicast transport through the design
    and simulation of an experimental network
    protocol called Receiver-driven Layered Multicast
    (RLM)

3
  • Example U.C. Berkeley broadcasting a seminar
    over the campus network and the Internet
  • One approach is to combine a layered compression
    algorithm with a layered transmission scheme
  • - selective forwarding forward only the number
    of layers that each link can manage
  • - but mechanisms to determine, communicate and
    execute this are unavailable
  • - a novel mechanism based on IP Multicast was
    suggested by Deering
  • - the different layers of the hierarchical
    signal are striped across multiple multicast
    groups

4
The Network Model
  • RLM works within the existing IP model and does
    not require any new machinery in the network
  • Assumptions for a single source
  • 1. Only best-effort, multipoint packet delivery
  • 2. The delivery efficiency of IP Multicast
  • 3. Group-oriented communication
  • For multiple, simultaneous sources
  • - receivers can specify their group membership
    on a per-source basis
  • A session is a set of end-systems communicating
    via a common set of layered multicast groups
  • RLM is most easily conceptualized in a network
    where all groups follow the same route, but this
    is not a requirement

5
RLM PROTOCOL
  • The source takes no active role
  • The protocol runs at the receiver, where
    adaptation is carried out by joining and leaving
    groups
  • - on congestion, drop a layer
  • - on spare capacity, add a layer
  • 1. Capacity Inference
  • - the receiver must determine whether its
    current level of subscription is too high or
    too low
  • - one way is to monitor link utilization and
    explicitly notify end-systems, but this renders
    deployment difficult
  • - the approach in RLM is to carry out active
    experiments by spontaneously adding layers at
    well-chosen times
  • - this spontaneous subscription to the next
    layer in the hierarchy is called a
    join-experiment
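The add/drop rule above can be sketched as a small receiver loop (a minimal illustration; the RlmReceiver class, layer count, and event hooks are assumptions, not the paper's implementation):

```python
# Minimal sketch of RLM receiver adaptation (hypothetical primitives).
class RlmReceiver:
    def __init__(self, num_layers):
        self.num_layers = num_layers
        self.level = 0  # number of layers currently subscribed

    def on_congestion(self):
        # Congestion detected: drop the highest layer.
        if self.level > 0:
            self.level -= 1

    def on_join_timer(self):
        # Join-experiment: spontaneously subscribe to the next layer.
        if self.level < self.num_layers:
            self.level += 1

r = RlmReceiver(num_layers=6)
r.on_join_timer()   # experiment: subscribe to layer 1
r.on_congestion()   # experiment failed: drop back
print(r.level)      # prints 0
```

In a real receiver the two hooks would be driven by loss measurements and the join-timer described on the following slides.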

6
  • 2. RLM Adaptation
  • - join-experiments cause transient congestion
    that impacts the quality of the delivered signal
  • - so, minimize the frequency and duration of
    experiments without impairing the convergence of
    the algorithm
  • - done through a learning algorithm a separate
    join-timer for each level of subscription, with
    exponential backoff for problematic layers
  • - detection-time the time taken for a local
    layer change to be fully established in the
    network and for the resulting impact to be
    detected back at the receiver
  • 3. Scaling RLM
  • - the aggregate frequency of join-experiments
    increases as session membership grows
  • - avoid problems by scaling down the individual
    experiment rate so the aggregate rate is
    independent of session size (the drawback is
    that the learning rate decreases)
  • - the solution is shared learning receivers
    learn from each other's experiments
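The scaling idea, keeping the aggregate experiment rate roughly constant by reducing each receiver's rate, can be sketched as follows (the function name and values are illustrative, not from the paper):

```python
def per_receiver_experiment_rate(aggregate_rate, session_size):
    # Scale each receiver's join-experiment rate down with session
    # size so that the aggregate experiment rate stays roughly
    # constant as membership grows.
    return aggregate_rate / max(1, session_size)

# Ten receivers sharing a budget of 1 experiment/minute:
print(per_receiver_experiment_rate(1.0, 10))  # prints 0.1
```

The drawback noted above follows directly: each receiver experiments ten times less often, so it learns its sustainable level more slowly, which is what shared learning compensates for.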

7
  • 4. RLM State Machine
  • - Tj is randomized to avoid protocol
    synchronization effects
  • - Td is set to a scaled value of the
    detection-time estimator
  • - the loss rate is measured with a short-term
    estimator and action is taken if the estimator
    exceeds a threshold (L > T)
  • - when the join-timer expires, a message is
    broadcast to the group and a layer is added the
    receiver notes the start of the experiment
  • - the experiment is in progress if the elapsed
    time is less than k1·TD + k2·σD
  • - when loss occurs in the S state, the resulting
    action depends on the presence of a
    join-experiment
  • - if a join-experiment has failed, then drop the
    offending layer, back off the join-timer and
    enter the D state
  • - if not certain, then enter the M state to look
    for longer-term congestion
  • - if we are not conducting the experiment, then
    transition to the H state
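The loss-handling branch of the state machine can be sketched directly from the three cases above (a simplified reduction; the state letters follow the slide, the function itself is illustrative):

```python
def on_loss_in_steady_state(experiment_status):
    """Next RLM state when loss is detected in the S (steady) state.

    experiment_status: 'failed'    - our join-experiment failed
                       'uncertain' - the cause of loss is unclear
                       'none'      - we are not experimenting
    """
    if experiment_status == 'failed':
        return 'D'  # drop the offending layer, back off join-timer
    if experiment_status == 'uncertain':
        return 'M'  # measure longer-term congestion
    return 'H'      # hysteresis: absorb someone else's experiment
```

The full state machine also covers timer expiry and the return paths from D, M and H back to S, which this fragment omits.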

8
  • 5. Protocol State Maintenance
  • - the detection-time estimator and join-timers
    must be dynamically adapted to reflect changing
    network conditions
  • - the join-timer is multiplicatively increased
    when a join-experiment fails
  • TkJ <- min(α·TkJ, TmaxJ), α > 1
  • - the join-timer undergoes relaxation in
    steady-state
  • - the longer a receiver is in steady-state at
    some level, the more likely it is for that
    level to be stable
  • TkJ <- max(β·TkJ, TminJ), β < 1
  • - the detection-time reflects the latency
    between the time the local action is carried
    out and the time its impact is reflected back
    at the receiver
  • - each time a join-experiment fails, the
    detection-time estimator is fed the new latency
    measurement
  • - the measurement Di is passed through a
    low-pass filter with gains g1, g2
  • σD <- (1-g2)·σD + g2·|Di - TD|
  • TD <- (1-g1)·TD + g1·Di
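The timer updates and the detection-time filter can be written directly from these equations (only the update forms come from the slide; the constants below are illustrative assumptions):

```python
# Illustrative constants (assumed, not from the paper):
ALPHA, BETA = 2.0, 0.9        # backoff (>1) and relaxation (<1)
TJ_MIN, TJ_MAX = 5.0, 600.0   # join-timer bounds in seconds
G1, G2 = 0.25, 0.25           # low-pass filter gains

def backoff_join_timer(tj):
    # Multiplicative increase after a failed join-experiment.
    return min(ALPHA * tj, TJ_MAX)

def relax_join_timer(tj):
    # Relaxation while the receiver sits in steady state.
    return max(BETA * tj, TJ_MIN)

def update_detection_time(td, sigma_d, d_i):
    # Feed a new latency measurement D_i through the low-pass
    # filter; the deviation uses the old mean, as in the slide.
    sigma_d = (1 - G2) * sigma_d + G2 * abs(d_i - td)
    td = (1 - G1) * td + G1 * d_i
    return td, sigma_d
```

This is the same mean-plus-deviation filtering style used for TCP round-trip time estimation, which is presumably why the in-progress test on the previous slide uses k1·TD + k2·σD.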

9
Simulations
  • In a real network, performance is affected by
    cross-traffic and competing groups
  • The RLM protocol was implemented on the LBNL
    network simulator, ns
  • Packets are generated at times
  • T0 = 0
  • Tk = Tk-1 + Δ + Nk, k > 0
  • where Nk is a zero-mean noise process and Δ is
    a fixed interval chosen to meet the target
    bit-rate
  • Variable parameters network topology, link
    bandwidths and latencies, number and rate of
    transmission layers, and placement of senders
    and receivers
  • Fixed parameters the routing discipline, the
    router queue size, and the packet size
  • In all the simulations, link bandwidth is 1.5
    Mb/s, traffic sources are modeled as a
    six-layer CBR stream at rates 32·2^m kb/s,
    m = 0..5, and the start time of each receiver
    is chosen uniformly at random on the interval
    [30, 120] seconds
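The packet-generation process T0 = 0, Tk = Tk-1 + Δ + Nk can be sketched as follows (Gaussian noise is one possible zero-mean model; the parameter values are illustrative, not the paper's):

```python
import random

def packet_times(n, delta, noise_std, seed=0):
    """Generate n packet times with T_0 = 0 and
    T_k = T_{k-1} + delta + N_k, where N_k is zero-mean
    Gaussian noise (one possible noise model)."""
    rng = random.Random(seed)
    times = [0.0]
    for _ in range(1, n):
        times.append(times[-1] + delta + rng.gauss(0.0, noise_std))
    return times

# 10 ms nominal spacing with 1 ms jitter:
ts = packet_times(5, delta=0.01, noise_std=0.001)
print(len(ts), ts[0])
```

Choosing delta = packet_size / target_bit_rate makes the stream CBR on average while the noise term breaks up phase effects in the simulation.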

10
  • 1. Evaluation Metrics
  • Two metrics reflect the quality of a real-time,
    loss-tolerant stream
  • - worst-case loss rate over varying time scales
  • - throughput
  • 2. Experiments
  • - the protocol's delay scalability
  • - scalability with respect to session size
  • - performance in the presence of bandwidth
    heterogeneity
  • - superposition of a large number of
    independent sessions
  • 3. Results
  • Latency Scalability
  • - the duration of a join-experiment is roughly
    twice the link latency plus the queue build-up
    time
  • - varied the link delay from 1 ms to 20 s and
    computed the worst-case loss rate to explore
    delay sensitivity
  • - slide a measurement window (1, 10, 100 s)
    over the packet process
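The sliding-window worst-case loss metric can be sketched over a packet trace (a packet-count window standing in for the 1/10/100-second time windows; the trace below is made up):

```python
def worst_case_loss(loss_flags, window):
    """Worst-case loss rate over a sliding window of `window`
    packets, where loss_flags[i] is 1 if packet i was lost."""
    worst = 0.0
    for i in range(len(loss_flags) - window + 1):
        worst = max(worst, sum(loss_flags[i:i + window]) / window)
    return worst

print(worst_case_loss([0, 1, 1, 0, 0, 0, 1, 0], window=4))  # prints 0.5
```

Sweeping the window size, as the slide describes, distinguishes brief join-experiment transients (bad only at small windows) from persistent congestion (bad at every window size).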

11
  • Session Scalability
  • - plotted the maximum loss rate against the
    number of receivers for each time scale
  • - the maximum loss rate is computed by taking
    the maximum across the worst-case loss rates
    of each receiver
  • - worst-case loss rates are independent of
    session size
  • Bandwidth Heterogeneity
  • - RLM works well even in the presence of a
    large set of receivers with different
    bandwidth constraints
  • - worst-case loss rates are comparable
  • - short-term congestion periods can last longer
    at larger session sizes the impact is reduced
    by limiting the detection-time estimator
  • Superposition
  • - the bottleneck bandwidth was scaled in
    proportion to the number of sender/receiver
    pairs, and the router queue limit was scaled
    to twice the bandwidth-delay product
  • - RLM converged to an aggregate link
    utilization close to one

12
Network Implications
  • Implications of RLM for other components in a
    comprehensive system
  • 1. Receiver-consensus
  • - an important requirement is that all users
    must cooperate
  • 2. Group Maintenance
  • - the performance of RLM depends on the
    join/leave latencies
  • - when a receiver joins a new group, the host
    informs the next-hop router, which propagates
    a graft message in order to instantiate the
    new group
  • - the leave case is more complicated when a
    host drops a group, it broadcasts a
    leave-group message; when no members remain,
    the router sends a prune message to suppress
    the group
  • 3. Fairness
  • - an aggregate performance metric depends on
    how group fairness is defined
  • - RLM alone does not provide fairness; if
    machinery for fairness is added to the
    network, RLM should work efficiently with it
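The graft/prune behavior described under Group Maintenance can be modeled with a toy router (the class and methods are illustrative, not a real multicast routing implementation):

```python
class Router:
    # Toy model of multicast group maintenance: graft on the first
    # join, prune when the last member leaves.
    def __init__(self):
        self.members = {}  # group -> set of member hosts

    def join(self, group, host):
        if group not in self.members:
            self.members[group] = set()
            self.graft(group)       # propagate graft upstream
        self.members[group].add(host)

    def leave(self, group, host):
        self.members.get(group, set()).discard(host)
        if group in self.members and not self.members[group]:
            del self.members[group]
            self.prune(group)       # suppress the group upstream

    def graft(self, group):
        pass  # upstream signaling omitted in this sketch

    def prune(self, group):
        pass  # upstream signaling omitted in this sketch
```

RLM's sensitivity to join/leave latency comes from exactly these two paths: the graft delay bounds how quickly a join-experiment takes effect, and the prune delay bounds how quickly dropping a layer relieves congestion.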

13
The Application
  • A layered source coder was developed to
    complement the layered transmission system; the
    goal is to design, build and evaluate all
    components that contribute to a scalable video
    transmission system
  • Two key features of the layered coder are
    resilience to packet loss and low complexity
  • The RLM and layered codec are being implemented
    in the UCB/LBNL video conferencing tool, vic

Future Work
  • Needs a better evaluation metric - level of
    quality perceived by the user
  • Experiment with algorithms that dynamically
    adjust the bit-rate allocation of the different
    compression layers
  • Interactions among multiple RLM sessions in the
    context of different scheduling algorithms
  • Explore interactions with other
    bandwidth-adaptive protocols like TCP
  • Improve modeling and analysis of the problem

14
SUMMARY
  • The paper proposed a framework for the
    transmission of layered signals over
    heterogeneous networks using receiver-driven
    adaptation
  • Evaluated the performance of RLM through
    simulation and showed that it exhibits reasonable
    loss and convergence rates under various scaling
    scenarios
  • Focused on complete systems design
  • Described the work on a low-complexity,
    error-resilient layered source coder. This,
    when combined with RLM, provides a
    comprehensive solution for scalable multicast
    video transmission in heterogeneous networks