1
Broadcast Federation
  • An architecture for scalable inter-domain multicast/broadcast.

Mukund Seshadri (mukunds@cs.berkeley.edu)
with
Yatin Chawathe (yatin@research.att.com)
Fall 2002
http://www.cs.berkeley.edu/mukunds/ms/ms-report.ps.gz
2
Motivation
Goal
  • Design an architecture for the composition of
    different, non-interoperable multicast/broadcast
    domains to provide an end-to-end one-to-many
    packet delivery service.
  • Design and implement a high-performance
    (clustered) broadcast gateway for the above
    architecture.
  • No universally deployed multicast protocol.
  • e.g. IP Multicast, SSM, Overlays, CDNs
  • Typical problems
  • Address-space scarcity (in IP Multicast)
  • Limited scalability
  • e.g. IP Multicast involves some form of flooding
  • Need for administrative boundaries.

3
Architecture
  • Broadcast Network (BN): any multicast-capable
    network, domain, or CDN
  • Broadcast Gateway (BG)
  • Bridges between 2 BNs
  • Explicit BG peering (see the peering sketch at
    the end of this slide)
  • Overlay of BGs
  • Analogous to BGP routers.
  • App-level
  • Works with both app-layer and IP-layer protocols
  • But less efficient link usage and more delay
  • Commodity hardware
  • Easier customizability and deployability
  • But less efficient than specialized hardware

[Architecture figure: a source and clients attached
to Broadcast Networks (BNs); Broadcast Gateways (BGs)
peer across BNs, with control peering and data paths
shown.]
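As a rough illustration only (the field names and the idea of a
static peer table are assumptions, not taken from the report), a
BG's explicit peering configuration could look like this in C:

/* bg_peering.c - hypothetical sketch of a Broadcast Gateway's peer table.
 * Field names and the static config array are assumptions for illustration;
 * the actual BG configuration format is not described in the slides. */
#include <stdio.h>

#define MAX_PEERS 16

struct bg_peer {
    char name[64];          /* remote BG identifier                   */
    char bn[64];            /* the BN the remote BG belongs to        */
    char control_addr[64];  /* address for control (peering) traffic  */
};

struct broadcast_gateway {
    char local_bn[64];               /* BN this gateway serves        */
    struct bg_peer peers[MAX_PEERS]; /* explicitly configured peers   */
    int num_peers;
};

int main(void) {
    struct broadcast_gateway bg = {
        .local_bn = "berkeley-ipmc",
        .peers = {
            { "bg-att-1", "att-cdn", "10.0.0.1:7000" },
            { "bg-ssm-1", "ssm-net", "10.0.1.1:7000" },
        },
        .num_peers = 2,
    };
    for (int i = 0; i < bg.num_peers; i++)
        printf("peer %s in BN %s at %s\n",
               bg.peers[i].name, bg.peers[i].bn, bg.peers[i].control_addr);
    return 0;
}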
4
Naming
  • A session is associated with a unique Owner BN
  • For shared-tree protocols
  • Address space limited only by individual BNs'
    naming protocols.
  • URL style: bin://Owner_BN/native_session_name/flow_1_name/flow_2_name/?pmtr1=value1...
    (a parsing sketch follows this list)
  • native_session_name - specific to the owner BN
  • flow(s): related flows grouped under one session.
  • pmtr(s): metrics, transport, single-source/multi-
    source.
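
To make the URL-style naming concrete, here is a minimal parsing
sketch in C. It assumes only the separators shown above (the bin://
prefix, '/' between components, '?' before parameters); the struct
layout and function names are invented for illustration and are not
the authors' code.

/* session_name.c - minimal parser for the URL-style session names above. */
#include <stdio.h>
#include <string.h>

#define MAX_FLOWS 8

struct session_name {
    char owner_bn[64];
    char native_name[64];
    char flows[MAX_FLOWS][64];
    int  num_flows;
    char params[128];   /* raw parameter string after '?', if any */
};

/* Returns 0 on success, -1 if the name is not in the expected form. */
static int parse_session_name(const char *url, struct session_name *s) {
    const char *prefix = "bin://";
    if (strncmp(url, prefix, strlen(prefix)) != 0) return -1;
    char buf[512];
    strncpy(buf, url + strlen(prefix), sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    memset(s, 0, sizeof(*s));
    char *q = strchr(buf, '?');          /* split off parameters */
    if (q) { strncpy(s->params, q + 1, sizeof(s->params) - 1); *q = '\0'; }

    int field = 0;
    for (char *tok = strtok(buf, "/"); tok; tok = strtok(NULL, "/")) {
        if (field == 0)      strncpy(s->owner_bn, tok, 63);
        else if (field == 1) strncpy(s->native_name, tok, 63);
        else if (s->num_flows < MAX_FLOWS)
            strncpy(s->flows[s->num_flows++], tok, 63);
        field++;
    }
    return (field >= 2) ? 0 : -1;
}

int main(void) {
    struct session_name s;
    if (parse_session_name("bin://att-cdn/concert42/audio/video/?metric=latency", &s) == 0)
        printf("owner=%s native=%s flows=%d params=%s\n",
               s.owner_bn, s.native_name, s.num_flows, s.params);
    return 0;
}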

Routing
  • Path vector algorithm to propagate BN/BG
    reachability info
  • Scope for forwarding policy hooks
  • Different routes for different metrics/options
  • e.g. BN-hop-count with best-effort/multi-source,
    latency with reliable delivery, etc.
  • Session-agnostic, in order to avoid all BNs
    knowing about all sessions (a route-selection
    sketch follows this list).
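
The slides do not give the route-selection rules, so the following is
only a sketch of how a path-vector choice might look in C: an
advertised route is rejected if its BN path already contains the local
BN (standard path-vector loop avoidance), and otherwise kept if its
metric beats the current route. All names and the metric comparison
are illustrative assumptions.

/* pathvector.c - sketch of path-vector route selection between BNs. */
#include <stdio.h>
#include <string.h>

#define MAX_PATH 16

struct bn_route {
    char dest_bn[64];
    char path[MAX_PATH][64];  /* BNs traversed, nearest first            */
    int  path_len;
    int  metric;              /* e.g. BN hop count or a latency estimate */
};

/* Does the advertised path already contain the given BN? */
static int path_contains(const struct bn_route *r, const char *bn) {
    for (int i = 0; i < r->path_len; i++)
        if (strcmp(r->path[i], bn) == 0) return 1;
    return 0;
}

/* Keep the advertised route only if it is loop-free and better than current. */
static int accept_route(const struct bn_route *adv, const struct bn_route *cur,
                        const char *local_bn) {
    if (path_contains(adv, local_bn)) return 0;
    if (cur == NULL) return 1;
    return adv->metric < cur->metric;
}

int main(void) {
    struct bn_route adv = { "ssm-net", { "att-cdn", "ssm-net" }, 2, 2 };
    struct bn_route cur = { "ssm-net", { "overlay-x", "att-cdn", "ssm-net" }, 3, 3 };
    printf("accept new route: %s\n",
           accept_route(&adv, &cur, "berkeley-ipmc") ? "yes" : "no");
    return 0;
}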

5
Distribution Trees
  • One reverse shortest-path tree per session
  • Tree rooted at the owner BN.
  • Soft tree state at each BG: per session, the
    upstream node and a list of downstream nodes
    (a soft-state sketch follows this list).
  • Bi-directional.
  • Fine-grained selectivity using SROUTE messages
    before the JOIN phase.
  • Built using JOIN messages, like PIM-SM.
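
A minimal sketch of the per-session soft tree state described above,
assuming a refresh-on-JOIN / expire-on-timeout discipline; the
30-second timeout and all field names are assumptions, not values
from the report.

/* tree_state.c - sketch of per-session soft forwarding-tree state at a BG. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_DOWN 16
#define STATE_TIMEOUT 30   /* seconds; illustrative value only */

struct tree_entry {
    char   session[128];
    char   upstream[64];             /* next hop toward the owner BN     */
    char   downstream[MAX_DOWN][64]; /* nodes that sent us JOINs         */
    time_t expires[MAX_DOWN];        /* per-downstream soft-state timer  */
    int    num_down;
};

/* Refresh (or add) a downstream node when a JOIN arrives. */
static void on_join(struct tree_entry *e, const char *node) {
    for (int i = 0; i < e->num_down; i++)
        if (strcmp(e->downstream[i], node) == 0) {
            e->expires[i] = time(NULL) + STATE_TIMEOUT;
            return;
        }
    if (e->num_down < MAX_DOWN) {
        strncpy(e->downstream[e->num_down], node, 63);
        e->expires[e->num_down] = time(NULL) + STATE_TIMEOUT;
        e->num_down++;
    }
}

/* Drop downstream entries whose timers have run out. */
static void expire_stale(struct tree_entry *e) {
    time_t now = time(NULL);
    int kept = 0;
    for (int i = 0; i < e->num_down; i++)
        if (e->expires[i] > now) {
            if (kept != i) {
                strcpy(e->downstream[kept], e->downstream[i]);
                e->expires[kept] = e->expires[i];
            }
            kept++;
        }
    e->num_down = kept;
}

int main(void) {
    struct tree_entry e = { .session = "bin://att-cdn/concert42",
                            .upstream = "bg-att-1" };
    on_join(&e, "bg-ssm-1");
    expire_stale(&e);
    printf("session %s: %d downstream node(s)\n", e.session, e.num_down);
    return 0;
}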

B-Fed Mediator (an abstraction)
  • Client in the owner BN → no interaction with the
    federation.
  • Client not in the owner BN → needs to send a JOIN
    to a BG in its BN.
  • So BNs are required to implement the Mediator
    abstraction, to deliver session JOINs to BGs.
  • e.g. a mediator IP multicast group
  • e.g. routers or other BN-specific aggregators

6
Data Forwarding
  • Decouples control from data
  • i.e. control nodes from data nodes.
  • TRANSLATION messages carry per-session data-path
    addresses (see the sketch below)
  • e.g. a TCP/UDP/IP Multicast address and port.
  • e.g. a transit SSM network might require 2
    channels to be set up for one session.
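
As an illustration of what a TRANSLATION message might carry, the
sketch below bundles a session name with one or more data-path
endpoints (transport, address, port); the field names and the enum are
assumptions, and the two UDP endpoints in main mirror the transit-SSM
example above.

/* translation_msg.c - sketch of a TRANSLATION message: the per-session
 * data-path endpoint(s) a peer should send data to. */
#include <stdio.h>

enum transport { T_TCP, T_UDP, T_IP_MULTICAST };

struct data_endpoint {
    enum transport transport;
    char           addr[64];   /* unicast or multicast address */
    unsigned short port;
};

#define MAX_ENDPOINTS 4

struct translation_msg {
    char                 session[128];
    struct data_endpoint endpoints[MAX_ENDPOINTS];
    int                  num_endpoints;
};

int main(void) {
    /* A transit SSM network needing two channels for one session. */
    struct translation_msg m = {
        .session = "bin://att-cdn/concert42",
        .endpoints = {
            { T_UDP, "232.1.1.1", 9000 },
            { T_UDP, "232.1.1.2", 9001 },
        },
        .num_endpoints = 2,
    };
    for (int i = 0; i < m.num_endpoints; i++)
        printf("%s -> endpoint %s:%u\n", m.session, m.endpoints[i].addr,
               (unsigned)m.endpoints[i].port);
    return 0;
}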

NativeCast (interface)
  • Encapsulates all BN-specific customization
  • Interface to local broadcast capability
  • Send and Receive broadcast data
  • Allocate and reclaim local broadcast addresses
  • Subscribe to and unsubscribe from local broadcast
    sessions
  • Implement Mediator functionality: intercept
    and reply to local JOINs
  • Get SROUTE values (see the interface sketch
    below).
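
One natural way to express an interface like NativeCast in C is a
table of function pointers, with one instance per BN type (IP
Multicast, the HTTP-based CDN, SSM). The signatures below only mirror
the bullet list above and are not the authors' actual interface.

/* nativecast.c - sketch of the NativeCast interface as a function-pointer
 * table, one instance per BN type. All names and signatures are illustrative. */
#include <stdio.h>
#include <stddef.h>

struct nativecast_ops {
    /* Send/receive broadcast data on the local BN. */
    int  (*send_data)(const char *local_addr, const void *buf, size_t len);
    int  (*recv_data)(const char *local_addr, void *buf, size_t len);
    /* Allocate and reclaim local broadcast addresses. */
    int  (*alloc_addr)(char *addr_out, size_t addr_len);
    void (*free_addr)(const char *addr);
    /* Subscribe to / unsubscribe from a local broadcast session. */
    int  (*subscribe)(const char *addr);
    int  (*unsubscribe)(const char *addr);
    /* Mediator functionality: intercept a local JOIN and answer it. */
    int  (*handle_local_join)(const char *session);
    /* Provide SROUTE values for fine-grained route selection. */
    int  (*get_sroute)(const char *session);
};

/* Trivial stand-in for an IP Multicast backend, just to show the shape. */
static int ipmc_alloc_addr(char *out, size_t len) {
    return snprintf(out, len, "224.0.55.1") > 0 ? 0 : -1;
}

int main(void) {
    struct nativecast_ops ipmc = { .alloc_addr = ipmc_alloc_addr };
    char addr[64];
    if (ipmc.alloc_addr && ipmc.alloc_addr(addr, sizeof(addr)) == 0)
        printf("allocated local broadcast address %s\n", addr);
    return 0;
}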

7
Clustered BG design
  • 1 Control node + n Data nodes.
  • Control node → routing and tree-building.
  • Independent data paths → data flows directly
    through the data nodes.
  • TRANSLATION messages contain IP addresses of data
    nodes in the cluster (a data-node assignment
    sketch follows this list).
  • Throughput bottlenecked only by the IP
    router/NIC.
  • Soft data-forwarding state at data nodes.
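
The slides say only that TRANSLATION messages carry the data nodes'
IP addresses; how the control node picks a data node for a session is
not specified. The sketch below assumes a simple hash of the session
name over the cluster, purely for illustration.

/* cluster_assign.c - sketch of a control node picking a data node for a
 * session; its address would go into the TRANSLATION message so that data
 * bypasses the control node entirely. The hash policy is an assumption. */
#include <stdio.h>

#define NUM_DNODES 5

static const char *dnode_addrs[NUM_DNODES] = {
    "10.1.0.1", "10.1.0.2", "10.1.0.3", "10.1.0.4", "10.1.0.5"
};

/* Simple string hash (djb2) used to spread sessions over data nodes. */
static unsigned long hash_str(const char *s) {
    unsigned long h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h;
}

static const char *dnode_for_session(const char *session) {
    return dnode_addrs[hash_str(session) % NUM_DNODES];
}

int main(void) {
    const char *session = "bin://att-cdn/concert42";
    printf("session %s -> data node %s\n", session, dnode_for_session(session));
    return 0;
}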

8
Implementation
  • Linux/C event-driven program
  • NativeCast for IP Multicast, a simple HTTP-based
    CDN, and SSM: 700 lines of code each.
  • Setup
  • No. of sources = no. of sinks = no. of Dnodes
    (so that sources/sinks don't become the bottleneck).
  • Each machine capable of driving a 350 Mbps
    TCP stream.
  • 500 MHz PIIIs with 1 Gbps NICs.

[Figure: maximum (TCP-based) throughput achievable
with different data message (framing) sizes.]
[Figure: variation of single-Dnode throughput as the
number of sessions is increased; sources are
rate-unlimited.]
9
Summary
  • 1 Gbps B/W with 5 Dnodes
  • 2500 sessions with 5 Dnodes
  • Throughput drops as the number of sessions grows
  • Higher framing sizes yield higher throughput.

[Figure: variation of single-Dnode throughput as the
frame size is varied; sources are rate-unlimited.]
Future Work
  • Use better I/O interfaces for higher throughput
    with a large number of file descriptors (kernel
    patch or OS change required).
  • What if a client has access to multiple BNs?
  • Wide area deployment?