1
TeraPaths: End-to-End Network Path QoS
Configuration Using Cross-Domain Reservation
Negotiation
  • Bruce Gibbard
  • Dimitrios Katramatos
  • Shawn McKee
  • Dantong Yu
  • GridNets 2006

2
Outline
  • Introduction
  • The TeraPaths project
  • The TeraPaths system architecture
  • Experimental deployment and testing
  • Future work

3
Introduction
  • The problem: support efficient, reliable, and
    predictable peta-scale data movement in modern
    high-speed networks
  • Multiple data flows with different priorities
  • Default best-effort network behavior can cause
    performance and service disruption problems
  • Solution: enhance network functionality with QoS
    features to allow prioritization and protection
    of data flows

4
e.g., ATLAS Data Distribution
[Diagram: the tiered ATLAS data distribution model.
The ATLAS experiment's online system at CERN
produces data at PBps rates; Tier 0/1 at CERN
distributes it at GBps, over 10-40 Gbps links, to
Tier 1 sites such as BNL (10 Gbps); Tier 2 sites
(Midwest, UMich, SLAC, SW, and NE sites; muon
calibration) connect at 2.5-10 Gbps; Tier 3 sites
at 100-1000 Mbps; Tier 4 is end-user workstations.]
5
Combining LAN QoS with MPLS
  • End sites use the DiffServ architecture to
    prioritize data flows at the packet level.
    DiffServ supports
  • Per-packet QoS marking (see the sketch after
    this list)
  • IP precedence (8 classes of service)
  • DSCP (64 classes of service)
  • WAN(s) connecting end sites direct prioritized
    traffic through MPLS tunnels of the requested
    bandwidth, configured to preserve packet
    markings. MPLS/GMPLS
  • Uses RSVP-TE
  • Is QoS compatible
  • Supports virtual tunnels, constraint-based
    routing, and policy-based routing
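
A minimal Java sketch of the per-packet DSCP
marking described above; the EF code point (46),
host, and port are illustrative assumptions, not
values taken from TeraPaths:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Sketch of application-level DSCP marking via the standard
    // Java socket API. DSCP 46 (Expedited Forwarding) is assumed
    // here as the prioritized class.
    public class DscpMarkingSketch {
        private static final int DSCP_EF = 46;

        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket()) {
                // The DSCP occupies the upper 6 bits of the IP TOS
                // byte, hence the shift by 2.
                s.setTrafficClass(DSCP_EF << 2);
                s.connect(new InetSocketAddress("data.example.org", 5001));
                s.getOutputStream().write("marked flow".getBytes());
                // Whether the marking survives end-to-end depends on
                // the OS and on routers preserving the TOS byte.
            }
        }
    }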

6
Prioritized vs. Best Effort Traffic
7
The TeraPaths Project
  • The TeraPaths project investigates the
    integration and use of LAN QoS and
    MPLS/GMPLS-based differentiated network services
    in the ATLAS data-intensive distributed computing
    environment in order to manage the network as a
    critical resource
  • DOE: the collaboration includes BNL and the
    University of Michigan, as well as OSCARS
    (ESnet), Lambda Station (FNAL), and DWMI (SLAC)
  • NSF: BNL participates in UltraLight to provide
    the network advances required to enable
    petabyte-scale analysis of globally distributed
    data
  • NSF: BNL participates in a new network
    initiative, PLaNetS (Physics Lambda Network
    System), led by Caltech

8
Automate MPLS/LAN QoS Setup
  • QoS reservation and network configuration system
    for data flows
  • Access to QoS reservations
  • Manually, through an interactive web interface
  • From a program, through APIs
  • Compatible with a variety of networking
    components
  • Cooperation with WAN providers and remote LAN
    sites
  • Access control and accounting
  • System monitoring
  • Design goal: enable the reservation of end-to-end
    network resources to assure a specified quality
    of service
  • User requests minimum bandwidth, start time, and
    duration (see the sketch after this list)
  • System either grants the request or makes a
    counteroffer
  • End-to-end network path is set up with one simple
    user request
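
A hedged illustration of this
request/grant-or-counteroffer model in Java; the
type and field names are hypothetical, not the
actual TeraPaths API:

    import java.time.Duration;
    import java.time.Instant;

    // Hypothetical data types for the reservation model described
    // above: the user asks for bandwidth, start time, and duration;
    // the system grants the request or returns a counteroffer.
    public class ReservationModelSketch {
        record Request(long minBandwidthMbps, Instant start,
                       Duration duration) {}

        // A response either grants the original request or carries a
        // counteroffer (e.g., later start or lower bandwidth) for the
        // user to accept or reject.
        record Response(boolean granted, Request counterOffer) {}

        public static void main(String[] args) {
            Request req = new Request(500,
                    Instant.now().plus(Duration.ofMinutes(30)),
                    Duration.ofHours(2));
            System.out.println("Requesting: " + req);
        }
    }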

9
End-to-End Configuration Models
10
Envisioned Overall Architecture
11
TeraPaths System Architecture
[Diagram: QoS requests arrive via web page, APIs, or
command line at Site A (initiator); each site's
services drive its hardware drivers, and the sites
are connected through a WAN chain to Site B (remote).]
12
TeraPaths Web Services
  • TeraPaths modules implemented as web services
  • Each network device (router/switch) is
    accessible/programmable from at least one
    management node
  • Site management node maintains databases
    (reservations, etc.) and distributes network
    programming by invoking web services on
    subordinate management nodes
  • Remote requests to/from other sites are invoked
    via the corresponding site's TeraPaths public web
    services layer
  • WAN services are invoked through proxy servers
    (standardization of interface, dynamic
    pluggability, fault tolerance), which enables
    interoperability among different implementations
    of web services
  • Web services benefits (see the interface sketch
    after this list)
  • Standardized, reliable, and robust environment
  • Implemented in Java and completely portable
  • Accessible via web clients and/or APIs
  • Compatible and easily portable into Grid services
    and the Web Services Resource Framework (WSRF in
    GT4)
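
Because the modules are implemented in Java as web
services, a site's public layer could be declared
along these lines using the standard JAX-WS
annotations; all method names and signatures below
are hypothetical, not the real interface:

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // Illustrative sketch of a site's public reservation endpoint.
    @WebService
    public interface SiteReservationService {

        // Ask the site to hold resources temporarily; a later
        // commit() confirms the hold (cf. the temporary/commit/start
        // model on the negotiation slide).
        @WebMethod
        String requestTemporaryReservation(long bandwidthMbps,
                                           String startTimeIso,
                                           long durationSeconds);

        @WebMethod
        boolean commit(String reservationId);

        @WebMethod
        void cancel(String reservationId);
    }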

13
TeraPaths Web Services Architecture
14
Site Bandwidth Partitioning Scheme
15
Reservation Negotiation
  • Capabilities of site reservation systems
  • Yes/no vs. counteroffer(s)
  • Direct commit vs. temporary/commit/start
  • Algorithms
  • Serial vs. parallel
  • Counteroffer processing vs. multiple trials
  • TeraPaths (current implementation; see the sketch
    after this list)
  • Counteroffers and temporary/commit/start
  • Serial procedure (local site / remote site /
    WAN), limited iterations
  • User approval requested for counteroffers
  • WAN (OSCARS) is yes/no and direct commit
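
A compact Java sketch of this serial negotiation
procedure (local site, then remote site, then WAN,
with bounded iterations and user approval of
counteroffers); all names are hypothetical:

    import java.util.List;

    // Hypothetical serial negotiation loop: try each domain in
    // order; on a counteroffer, ask the user and retry with the
    // adjusted request, up to a bounded number of iterations.
    public class NegotiationSketch {
        record Request(long mbps, long startEpochSec, long durationSec) {}

        interface Domain {
            // Returns null to accept the request as-is, or a
            // modified request as a counteroffer.
            Request tryReserve(Request r);
        }

        static boolean negotiate(List<Domain> chain, Request req,
                                 int maxIters) {
            for (int i = 0; i < maxIters; i++) {
                Request counter = null;
                for (Domain d : chain) {
                    counter = d.tryReserve(req);
                    if (counter != null) break; // this domain pushed back
                }
                if (counter == null) return true; // all domains accepted
                if (!userApproves(counter)) return false;
                // A real system would also release any temporary holds
                // taken earlier in the chain before retrying.
                req = counter;
            }
            return false; // iteration limit reached without agreement
        }

        static boolean userApproves(Request counter) {
            return true; // placeholder: the real system asks the user
        }

        public static void main(String[] args) {
            Domain oscars = r -> null; // yes/no WAN: accepts as-is
            System.out.println(negotiate(List.of(oscars),
                    new Request(500, 0, 7200), 3));
        }
    }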

16
Initial Experimental Testbed
  • Full-featured LAN QoS simulation testbed using a
    private network environment
  • Two Cisco switches (same models as production
    hardware) interconnected with 1Gb link
  • Two managing nodes, one per switch
  • Four host nodes, two per switch
  • All nodes have dual 1Gb Ethernet ports, also
    connected to BNL campus network
  • Managing nodes run web services, database
    servers, have exclusive access to switches
  • Demo of prototype TeraPaths functionality given
    at SC05

17
Simulated (testbed) and Actual Traffic
[Plots: testbed demo of competing iperf streams;
BNL to UMich, two bbcp disk-to-disk (dtd) transfers
with iperf background traffic through an ESnet MPLS
tunnel.]
18
New TeraPaths Testbed (end-to-end)
First end-to-end fully automated route setup:
BNL-ESnet-UMich on 8/17/06 at 1:41 pm EST
19
In Progress
  • Develop TeraPaths-aware tools (e.g., iperf, bbcp,
    GridFTP)
  • Dynamically configure and partition QoS-enabled
    paths to meet time-constrained network
    requirements
  • Develop a site-level network resource manager for
    multiple VOs vying for limited WAN resources
  • Integrate with software from other network
    projects: OSCARS, Lambda Station, and DWMI
  • Collaborate on creating standards for the support
    of the daisy-chain setup model

20
Future Work
  • Support dynamic bandwidth/routing adjustments
    based on resource usage policies and network
    monitoring data (provided by DWMI)
  • Extend MPLS within a site's LAN backbone
  • Further goal: widen deployment of QoS
    capabilities to Tier 1 and Tier 2 sites and
    create services to be honored/adopted by the
    CERN ATLAS/LHC Tier 0

21
Route Planning with MPLS
(future capability)
22
BNL Site Infrastructure