GGF3 Frascati

1
The EU DataTAG Project
Presented at the Grid Workshop, LISHEP Conference,
7th February, Rio de Janeiro, Brazil
Olivier H. Martin, CERN - IT Division
2
The project
  • Two main areas of focus
  • Grid related network research
  • Interoperability between European and US Grids
  • 2.5 Gbps transatlantic lambda between CERN
    (Geneva) and StarLight (Chicago)
  • dedicated to research (no production traffic)
  • Expected outcomes
  • Hide complexity of Wide Area Networking, i.e. the
    network becomes truly transparent.
  • Better interoperability between Grid projects in
    Europe and North America
  • DataGrid, possibly other EU-funded Grid projects
  • PPDG, GriPhyN, DTF, iVDGL (USA)

3
The project (cont)
  • European partners: INFN (IT), PPARC (UK),
    University of Amsterdam (NL) and CERN as project
    coordinator. INRIA (FR) and ESA (European Space
    Agency) will join soon.
  • Significant contributions to the DataTAG workplan
    have been made by Jason Leigh (EVL, University of
    Illinois), Joel Mambretti (Northwestern
    University) and Brian Tierney (LBNL).
  • Strong collaborations already in place with ANL,
    Caltech, FNAL, SLAC, University of Michigan, as
    well as Internet2 and ESnet.
  • Budget: 3.98 MEUR
  • Start date: January 1, 2002 - Duration: 2 years
  • Funded manpower: 15 persons/year
  • NSF support through the existing collaborative
    agreement with CERN (Eurolink award)

4
DataTAG project
[Network diagram: CERN, New York, STAR-LIGHT, STAR-TAP, Abilene, ESnet, MREN]
5
DataTAG planned set-up (second half 2002)
[Diagram: 2.5 Gb lambda between CERN (CIXP) and the CERN PoP in Chicago
(Qwest NBC PoP) at STARLIGHT, with DataTAG test equipment at both ends;
GEANT connecting UvA, INFN, PPARC and DataGrid on the European side;
ESnet and Abilene connecting PPDG, iVDGL, GriPhyN and DTF on the US side]
6
Workplan (1)
  • WP1 Provisioning and Operations (P. Moroni/CERN)
  • Will be done in cooperation with DANTE
  • Two major issues
  • Procurement
  • Routing: how can the DataTAG partners have
    transparent access to the DataTAG circuit across
    GEANT and their national networks?
  • WP5 Information dissemination and exploitation
    (CERN)
  • WP6 Project management (CERN)

7
Workplan (2)
  • WP2 High Performance Networking (Robin
    Tasker/PPARC)
  • High performance Transport
  • TCP/IP performance over networks with a large
    bandwidth-delay product (see the sketch after this
    list)
  • Alternative transport solutions
  • End to end inter-domain QoS
  • Advance network resource reservation
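To make the bandwidth-delay issue concrete, here is a minimal sketch (not from the slides) of the TCP window needed to keep a long, fat pipe full. The 2.5 Gbps rate matches the DataTAG lambda; the ~120 ms CERN-Chicago round-trip time is an illustrative assumption.

```python
def required_window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """TCP window needed to fill the path: bandwidth x delay, in bytes."""
    return bandwidth_bps * rtt_s / 8.0

if __name__ == "__main__":
    bw = 2.5e9    # DataTAG lambda: 2.5 Gbps
    rtt = 0.120   # assumed CERN <-> StarLight round-trip time (s)
    window = required_window_bytes(bw, rtt)
    # A typical default 64 KB TCP window caps throughput at window / RTT.
    capped_bps = 64e3 * 8 / rtt
    print(f"Window needed to fill the pipe: {window / 1e6:.1f} MB")
    print(f"Throughput with a 64 KB window: {capped_bps / 1e6:.1f} Mbps")
```

Sustaining a multi-megabyte window without losses is exactly what makes plain TCP struggle on such paths, which is what motivates the alternative transport work in WP2.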

8
Workplan (3)
  • WP3 Bulk Data Transfer and Application performance
    monitoring (Cees de Laat/UvA)
  • Performance validation
  • End to end user performance
  • Validation
  • Monitoring
  • Optimization
  • Application performance
  • Netlogger

9
Workplan (4)
  • WP4 Interoperability between Grid Domains
    (Antonia Ghiselli/INFN)
  • Grid resource discovery
  • Access policies, authorization and security
  • Identify major problems
  • Develop inter-Grid mechanisms able to
    interoperate with domain specific rules
  • Interworking between domain specific Grid
    services
  • Test Applications
  • Interoperability, performance and scalability issues

10
Planning details
  • The lambda availability is expected in the second
    half of 2002
  • Initially, test systems will either be at CERN or
    connect via GEANT
  • GEANT is expected to provide VPNs (or equivalent)
    for DataGrid and/or access to the GEANT PoPs
  • Later, it is hoped that dedicated lambdas for
    DataGrid will be made available through GEANT or
    other initiatives (e.g. Flag Telecom)
  • Initially, a 2.5 Gbps POS link
  • WDM later, depending on equipment availability

11
The STAR LIGHT
  • Next generation STAR TAP with the following main
    distinguishing features
  • Neutral location (Northwestern University)
  • 1/10 Gigabit Ethernet based
  • Multiple local loop providers
  • Optical switches for advanced experiments
  • The STAR LIGHT will provide a 2 x 622 Mbps ATM
    connection to the STAR TAP
  • Started in July 2001
  • Also hosting other advanced networking projects in
    Chicago and the State of Illinois
  • N.B. Most European Internet Exchange Points have
    already been implemented along the same lines.

12
StarLight Infrastructure
  • Soon, StarLight will be an optical switching
    facility for wavelengths

[Image: University of Illinois at Chicago]
13
Evolving StarLight Optical Network Connections
[Map: planned optical connections to StarLight in Chicago - Asia-Pacific,
SURFnet, CERN, CA*net4 (Vancouver, Seattle), Portland, San Francisco,
U Wisconsin, NYC, PSC, IU, NCSA, Caltech, SDSC, Atlanta, AMPATH,
DTF 40Gb, and locally ANL, UIC, NU, UC, IIT, MREN]
14
OMNInet Technology Trial at StarLight
[Diagram: four OMNInet sites in Chicago (Evanston, West Taylor, S. Federal,
Lakeshore), each with an optical switching platform, OPTera Metro 5200,
Passport 8600 and application cluster, interconnected by 10GbE WAN links
with GE attachments; future connection to CA*net4]
  • A four-site network in Chicago -- the first 10GE
    service trial!
  • A test bed for all-optical switching, advanced
    high-speed services, and new applications
    including high-performance streaming media and
    collaborative applications for health-care,
    financial, and commercial services.
  • Partners: SBC, Nortel, International Center for
    Advanced Internet Research (iCAIR) at
    Northwestern, Electronic Visualization Lab at
    Univ of Illinois, Argonne National Lab, CANARIE
OMNInet Technology Trial November 2001
15
Major Grid networking issues
  • QoS (Quality of Service)
  • still largely unresolved on a wide scale because
    of the complexity of deployment
  • TCP/IP performance over high bandwidth, long
    distance networks
  • The loss of a single packet will affect a 10 Gbps
    stream with a 200 ms RTT (round-trip time) for
    about 5 hours; during that time the average
    throughput will be 7.5 Gbps (see the sketch after
    this list)
  • End to end performance in the presence of
    firewalls
  • There is a lack of products; can we rely on
    products becoming available, or should a new
    architecture be evolved?
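A rough back-of-the-envelope check of the packet-loss figure above, as a sketch under standard TCP congestion-avoidance assumptions (the loss halves the window, which then grows by one MSS per RTT); the model is mine, not taken from the slides.

```python
def recovery_after_loss(rate_bps: float, rtt_s: float, mss_bytes: int = 1500):
    """Time to regain full rate after one loss, and the average rate meanwhile."""
    window_bytes = rate_bps * rtt_s / 8       # window that sustains the full rate
    deficit_bytes = window_bytes / 2          # the loss halves the window
    rtts = deficit_bytes / mss_bytes          # window grows by one MSS per RTT
    t_recover_s = rtts * rtt_s
    avg_bps = 0.75 * rate_bps                 # linear ramp from rate/2 back to rate
    return t_recover_s, avg_bps

if __name__ == "__main__":
    t, avg = recovery_after_loss(10e9, 0.200)
    print(f"Recovery takes ~{t / 3600:.1f} h at an average of {avg / 1e9:.2f} Gbps")
    # ~4.6 h, consistent with the slide's "about 5 hours" at 7.5 Gbps
```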

16
Multiple Gigabit/second networking:
Facts, Theory and Practice
  • FACTS
  • Gigabit Ethernet (GBE) nearly ubiquitous
  • 10GBE coming very soon
  • 10Gbps circuits have been available for some time
    already in Wide Area Networks (WAN).
  • 40Gbps is in sight on WANs, but what after?
  • THEORY
  • 1 GB file transferred in 11 seconds over a 1 Gbps
    circuit (*)
  • a 1 TB file transfer would still require 3 hours
  • and a 1 PB file transfer would require 4 months
  • PRACTICE
  • Rather disappointing results are obtained on high
    bandwidth, large RTT networks.
  • Multiple streams have become the norm
  • (*) according to the 75% empirical rule; the sketch
    below works through these figures
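A quick check of the THEORY figures, assuming the slide's 75% empirical rule (effective throughput of roughly three quarters of the nominal circuit speed); a sketch for illustration only, not project measurements.

```python
def transfer_time_s(size_bytes: float, link_bps: float,
                    efficiency: float = 0.75) -> float:
    """Transfer time assuming a fixed fraction of the nominal link speed."""
    return size_bytes * 8 / (link_bps * efficiency)

if __name__ == "__main__":
    link = 1e9  # 1 Gbps circuit
    for label, size in [("1 GB", 1e9), ("1 TB", 1e12), ("1 PB", 1e15)]:
        t = transfer_time_s(size, link)
        print(f"{label}: {t:,.0f} s  (~{t / 3600:.1f} h, ~{t / 86400:.0f} days)")
    # 1 GB -> ~11 s, 1 TB -> ~3 h, 1 PB -> ~123 days (about 4 months)
```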

17
Single stream vs multiple streams: effect of a
single packet loss (e.g. link error, buffer
overflow)
[Chart: throughput (Gbps) vs time after a single packet loss for different
numbers of parallel streams (1, 2, 5, 10); average throughputs shown range
from 3.75 to 9.375 Gbps, with a single 10 Gbps stream averaging 7.5 Gbps;
the chart marks T = 2.37 hours for RTT = 200 ms, MSS = 1500 B]
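A hedged sketch of arithmetic consistent with two of the averages in the chart (7.5 Gbps for one stream, 9.375 Gbps for two). The model and its assumptions are mine, not stated on the slide: the aggregate rate is split over n identical streams, a single loss halves only one stream's window, and throughput is averaged over the time T a single stream would need to recover.

```python
def avg_throughput_after_loss(rate_gbps: float, n_streams: int) -> float:
    """Average aggregate throughput over T after one packet loss (assumed model)."""
    # The affected stream recovers in T/n and averages 3/4 of its 1/n share
    # during that window, so the aggregate average is rate - rate / (4 * n^2).
    return rate_gbps - rate_gbps / (4 * n_streams ** 2)

if __name__ == "__main__":
    for n in (1, 2, 5, 10):
        print(f"{n:>2} streams: average {avg_throughput_after_loss(10.0, n):.3f} Gbps")
    # 1 -> 7.500, 2 -> 9.375, 5 -> 9.900, 10 -> 9.975 Gbps
```

Under this model the penalty of a single loss shrinks quadratically with the number of streams, which is the argument for the multiple-stream practice noted on the previous slide.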