1
The Evolution of ESnet
Joint Techs Summary
  • William E. Johnston, ESnet Manager and Senior
    Scientist
  • Lawrence Berkeley National Laboratory
  • (wej@es.net, www.es.net)

2
What Does ESnet Provide?
  • The purpose of ESnet is to support the science
    missions of the Department of Energy's Office of
    Science, as well as other parts of DOE (mostly
    NNSA). To this end, ESnet provides:
  • Comprehensive physical and logical connectivity
  • High bandwidth access to DOE sites and DOE's
    primary science collaborators: the Research and
    Education institutions in the US, Europe, Asia
    Pacific, and elsewhere
  • Full access to the global Internet for DOE Labs
    (160,000 routes from 180 peers at 40 peering
    points)
  • An architecture designed to move huge amounts of
    data between a small number of sites that are
    scattered all over the world
  • Full ISP services

3
ESnet Provides High-Speed Internet Connectivity
to DOE Facilities and Collaborators, Summer 2005
[Map: ESnet IP core (Qwest packet-over-SONET optical ring and hubs, plus Qwest ATM) and ESnet Science Data Network (SDN) core; IP core hubs (SEA, CHI-SL, SNV, DC) and SDN core hubs; commercial and R&E peering points (MAE-E, PAIX-PA, Equinix, etc.); high-speed peering points with Internet2/Abilene.]
  • 42 end user sites: Office of Science sponsored
    (22), NNSA sponsored (12), Laboratory sponsored
    (6), Joint sponsored (3), Other sponsored (NSF
    LIGO, NOAA)
  • Link legend: International (high speed); 10 Gb/s
    SDN core; 10 Gb/s IP core; 2.5 Gb/s IP core; MAN
    rings (≥ 10 Gb/s); OC12 ATM (622 Mb/s); OC12 /
    GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less
4
DOE Office of Science Drivers for Networking
  • The large-scale science that is the mission of
    the Office of Science is dependent on networks
    for
  • Sharing of massive amounts of data
  • Supporting thousands of collaborators world-wide
  • Distributed data processing
  • Distributed simulation, visualization, and
    computational steering
  • Distributed data management
  • These issues were explored in two Office of
    Science workshops that formulated networking
    requirements to meet the needs of the science
    programs (see refs.)

5
Evolving Quantitative Science Requirements for
Networks
6
Observed Drivers for the Evolution of ESnet
ESnet is currently transporting about 530
Terabytes/mo., and this volume is increasing
exponentially
[Chart: ESnet monthly accepted traffic, Feb. 1990 through May 2005, in TBytes/month.]
7
Observed Drivers for the Evolution of ESnet
ESnet traffic has increased by 10X every 46
months, on average, since 1990
[Chart: monthly accepted traffic in TBytes/month, with the 10X increments marked: Aug. 1990 to Oct. 1993 (39 months), Oct. 1993 to Jul. 1998 (57 months), Jul. 1998 to Dec. 2001 (42 months).]
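
As a quick arithmetic check on this growth rate, here is a minimal sketch (Python). The 46-month average and the roughly 530 TB/month mid-2005 volume come from the surrounding slides; the projections are illustrative extrapolations, not ESnet forecasts.

    import math

    # ESnet traffic grows ~10x every 46 months (the average of the
    # 39-, 57-, and 42-month intervals marked on the chart above).
    MONTHS_PER_10X = 46

    # Equivalent monthly growth factor: 10^(1/46) ~= 1.051 (~5.1%/month).
    monthly_factor = 10 ** (1 / MONTHS_PER_10X)

    # Doubling time: solve monthly_factor^t = 2  =>  t = ln 2 / ln factor.
    doubling_months = math.log(2) / math.log(monthly_factor)

    print(f"monthly growth factor: {monthly_factor:.3f}")   # ~1.051
    print(f"doubling time: {doubling_months:.1f} months")   # ~13.8

    # Extrapolating the ~530 TB/month of mid-2005 (illustrative only):
    for years in (1, 2, 5):
        projected = 530 * monthly_factor ** (12 * years)
        print(f"+{years} yr: ~{projected:,.0f} TB/month")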
8
ESnet Science Traffic
  • The top 100 ESnet flows consistently account for
    25-40% of ESnet's monthly total traffic; these
    are the result of DOE's Office of Science
    large-scale science projects
  • Top 100 are 100-150 Terabytes out of about 550
    Terabytes
  • The other 60-75% of the ESnet monthly traffic is
    in 6,000,000,000 flows
  • As LHC (CERN high energy physics accelerator)
    data starts to move, the large science flows will
    increase a lot (200-2000 times)
  • Both LHC US tier 1 data centers are at DOE Labs:
    Fermilab and Brookhaven
  • All of the data from the two major LHC
    experiments, CMS and Atlas, will be stored at
    these centers for analysis by groups at US
    universities

9
Source and Destination of the Top 30 Flows, Feb.
2005
[Bar chart: Terabytes/month per flow, grouped by category]
  • DOE Lab-International R&E: SLAC (US) → RAL (UK);
    SLAC (US) → IN2P3 (FR); SLAC (US) → Karlsruhe
    (DE); SLAC (US) → INFN CNAF (IT); Fermilab (US) →
    WestGrid (CA); Fermilab (US) → Karlsruhe (DE);
    Fermilab (US) → U. Toronto (CA); IN2P3 (FR) →
    Fermilab (US); U. Toronto (CA) → Fermilab (US);
    CERN (CH) → Fermilab (US); CERN (CH) → BNL (US)
  • Lab-U.S. R&E (domestic): LIGO (US) → Caltech
    (US); Fermilab (US) → U. Texas, Austin (US);
    Fermilab (US) → Johns Hopkins; Fermilab (US) → UC
    Davis (US); Fermilab (US) → SDSC (US); Fermilab
    (US) → MIT (US); LBNL (US) → U. Wisc. (US); LLNL
    (US) → NCAR (US)
  • Lab-Lab (domestic): NERSC (US) → LBNL (US) (five
    flows); BNL (US) → LLNL (US) (four flows);
    DOE/GTN (US) → JLab (US)
  • Lab-Comm. (domestic): Qwest (US) → ESnet (US)
10
DOE Science Requirements for Networking
  • Network bandwidth must increase substantially,
    not just in the backbone but all the way to the
    sites and the attached computing and storage
    systems
  • A highly reliable network is critical for
    science: when large-scale experiments depend on
    the network for success, the network must not fail
  • There must be network services that can guarantee
    various forms of quality-of-service (e.g.,
    bandwidth guarantees) and provide traffic
    isolation
  • A production, extremely reliable, IP network with
    Internet services must support the process of
    science

11
Strategy For The Evolution of ESnet
  • A three-part strategy for the evolution of ESnet:
  • 1) Metropolitan Area Network (MAN) rings to
    provide
  • dual site connectivity for reliability
  • much higher site-to-core bandwidth
  • support for both production IP and circuit-based
    traffic
  • 2) A Science Data Network (SDN) core for
  • provisioned, guaranteed bandwidth circuits to
    support large, high-speed science data flows
  • very high total bandwidth
  • multiply connecting MAN rings for protection
    against hub failure
  • alternate path for production IP traffic
  • 3) A High-reliability IP core (e.g. the current
    ESnet core) to address
  • general science requirements
  • Lab operational requirements
  • Backup for the SDN core
  • vehicle for science services

12
Strategy For The Evolution of ESnet: Two Core
Networks and Metro. Area Rings
[Map: production IP core (Qwest) and Science Data Network core (SDN, NLR circuits) linking Seattle, Sunnyvale, LA, San Diego, El Paso, Albuquerque, Atlanta, Washington DC, New York, and Chicago; Metropolitan Area Rings at the hubs; Lab-supplied international connections to CERN, GEANT (Europe), Asia-Pacific, and Australia. Legend: IP core hubs, SDN/NLR hubs, new hubs, Primary DOE Labs.]
13
First Two Steps in the Evolution of ESnet
  • 1) The SF Bay Area MAN will provide, to the five
    OSC Bay Area sites:
  • Very high speed site access: 20 Gb/s
  • Fully redundant site access
  • 2) The first two segments of the second national
    10 Gb/s core, the Science Data Network, will be
    San Diego to Sunnyvale to Seattle

14
ESnet SF Bay Area MAN Ring (Sept., 2005)
  • 2 λs (2 x 10 Gb/s optical channels) in a ring
    configuration, delivered as 10 GigEthernet
    circuits
  • 10-50X current site bandwidth
  • Dual site connection (independent east and
    west connections) to each site
  • Will be used as a 10 Gb/s production IP ring
    and 2 x 10 Gb/s paths (for circuit services) to
    each site
  • Qwest contract signed for two lambdas 2/2005,
    with options on two more
  • Project completion date is 9/2005
[Ring diagram: ESnet MAN ring (Qwest circuits) connecting Joint Genome Institute, LBNL, NERSC, LLNL, SNLL, SLAC, and NASA Ames via the Qwest/ESnet hub and the Level 3 hub; 10 Gb/s optical channels λ1 (production IP), λ2 (SDN/circuits), λ3 and λ4 (future); external connections to the IP core toward Chicago (Qwest) and El Paso, the SDN toward Seattle and San Diego (NLR), and the DOE Ultra Science Net (research net).]
15
SF Bay Area MAN Typical Site Configuration
[Diagram: typical site configuration. An ESnet 6509 switch at each site terminates the west λ1/λ2 and east λ1/λ2 sides of the SF BA MAN ring. Site-facing connections: n x 1 GE or 10 GE IP (0-10 Gb/s drop-off IP traffic to the site LAN) and 1 or 2 x 10 GE provisioned circuits via VLANs (0-20 Gb/s VLAN traffic). Ring-facing: 0-10 Gb/s pass-through IP traffic and 0-10 Gb/s pass-through VLAN traffic. Hardware: 24 x 1 GE line cards and 4 x 10 GE line cards (using 2 ports max. per card).]
16
Evolution of ESnet Step One: SF Bay Area MAN
and West Coast SDN
[Map: same national footprint as slide 12; the SF Bay Area MAN and the San Diego to Sunnyvale to Seattle SDN segments (NLR circuits) are marked in service by Sept., 2005, the remainder planned. Legend: Production IP core (Qwest), Science Data Network core, Metropolitan Area Networks, Lab-supplied international connections (CERN, GEANT, Asia-Pacific, Australia), IP core hubs, SDN/NLR hubs, new hubs, Primary DOE Labs.]
17
Evolution Next Steps
  • ORNL 10G circuit to Chicago
  • Chicago MAN
  • IWire partnership
  • Long Island MAN
  • Try to achieve some diversity in NYC by
    including a hub at 60 Hudson as well as 32 AoA
  • More SDN segments
  • Jefferson Lab via MATP and VORTEX

18
New Network Services
  • New network services are also critical for ESnet
    to meet the needs of large-scale science
  • The most important new network service is
    dynamically provisioned virtual circuits, which
    provide:
  • Traffic isolation
  • will enable the use of high-performance,
    non-standard transport mechanisms that cannot
    co-exist with commodity TCP-based transport (see,
    e.g., Tom Dunigan's compendium,
    http://www.csm.ornl.gov/~dunigan/netperf/netlinks.html)
  • Guaranteed bandwidth
  • currently the only way we have to address
    deadline scheduling, e.g. where fixed amounts of
    data have to reach sites on a fixed schedule so
    that the processing does not fall so far behind
    that it can never catch up; this is very
    important for experiment data analysis (see the
    rate sketch after this list)
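
To make the deadline constraint concrete, here is a minimal sketch of the rate calculation (Python); the dataset size, transfer window, and headroom factor are hypothetical examples, not ESnet figures.

    def required_rate_gbps(data_terabytes: float, window_hours: float,
                           headroom: float = 1.2) -> float:
        """Sustained rate needed to move data_terabytes within
        window_hours, padded by a headroom factor for protocol
        overhead and retransmission (illustrative, not ESnet policy)."""
        bits = data_terabytes * 8e12          # terabytes -> bits
        seconds = window_hours * 3600.0
        return headroom * bits / seconds / 1e9    # -> Gb/s

    # e.g. a hypothetical daily 10 TB dataset that must arrive within
    # 8 hours to keep an analysis pipeline from falling behind:
    print(f"{required_rate_gbps(10, 8):.2f} Gb/s")   # ~3.33 Gb/s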

19
OSCARS Guaranteed Bandwidth Service
  • Testing OSCARS Label Switched Paths (MPLS-based
    virtual circuits); a sketch of the reservation
    abstraction follows this list
  • (update in the panel discussion)
  • A collaboration with the other major science R&E
    networks to ensure compatible services (so that
    virtual circuits can be set up end-to-end across
    ESnet, Abilene, and GEANT)
  • Code is being jointly developed with Internet2's
    Bandwidth Reservation for User Work (BRUW)
    project, part of the Abilene HOPI (Hybrid
    Optical-Packet Infrastructure) project
  • Close cooperation with the GEANT virtual circuit
    project (lightpaths, the Joint Research Activity 3
    project)
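
The reservation abstraction behind such a service can be sketched as follows (Python); the class and field names are hypothetical illustrations, not the actual OSCARS or BRUW schema.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CircuitReservation:
        # Hypothetical shape of a guaranteed-bandwidth request.
        src: str             # ingress edge router/port
        dst: str             # egress edge router/port
        bandwidth_mbps: int  # policed rate for the MPLS LSP
        start: datetime
        end: datetime

        def overlaps(self, other: "CircuitReservation") -> bool:
            return self.start < other.end and other.start < self.end

    def admissible(new: CircuitReservation,
                   existing: list,
                   link_capacity_mbps: int = 10_000) -> bool:
        # Toy admission control on a single link: accept only if
        # concurrent reservations never exceed the link capacity.
        concurrent = sum(r.bandwidth_mbps for r in existing if r.overlaps(new))
        return concurrent + new.bandwidth_mbps <= link_capacity_mbps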

20
Federated Trust Services
  • Remote, multi-institutional, identity
    authentication is critical for distributed,
    collaborative science in order to permit sharing
    computing and data resources, and other Grid
    services
  • Managing cross-site trust agreements among many
    organizations is crucial for authentication in
    collaborative environments
  • ESnet assists in negotiating and managing the
    cross-site, cross-organization, and international
    trust relationships to provide policies that are
    tailored to collaborative science
  • The form of the ESnet trust services is driven
    entirely by the requirements of, and direct input
    from, the science community

21
ESnet Public Key Infrastructure
  • ESnet provides Public Key Infrastructure and
    X.509 identity certificates that are the basis of
    secure, cross-site authentication of people and
    Grid systems (an illustrative certificate check
    follows this list)
  • These services (www.doegrids.org) provide:
  • Several Certification Authorities (CAs) with
    different uses and policies that issue
    certificates after validating the request against
    policy
  • This service was the basis of the first routine
    sharing of HEP computing resources between the US
    and Europe
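
As an illustration of what a relying party does with such a certificate, here is a minimal sketch using the third-party Python cryptography package; the file name is hypothetical, and full chain validation to a trusted root is omitted.

    from datetime import datetime
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    # Load a PEM-encoded identity certificate (file name is hypothetical).
    with open("user-cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read(), default_backend())

    print("subject:", cert.subject.rfc4514_string())
    print("issuer: ", cert.issuer.rfc4514_string())   # e.g. a DOEGrids CA

    # Check at least the validity window; real Grid middleware also
    # builds and verifies the chain up to a trusted root CA.
    now = datetime.utcnow()
    if not (cert.not_valid_before <= now <= cert.not_valid_after):
        raise ValueError("certificate outside its validity window")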

22
ESnet Public Key Infrastructure
  • Root CA is kept off-line in a vault
  • Subordinate CAs are kept in locked, alarmed racks
    in an access controlled machine room and have
    dedicated firewalls
  • CAs with different policies as required by the
    science community
  • DOEGrids CA has a policy tailored to accommodate
    international science collaboration
  • NERSC CA policy integrates CA and certificate
    issuance with NIM (NERSC user accounts management
    services)
  • FusionGrid CA supports the FusionGrid roaming
    authentication and authorization services,
    providing complete key lifecycle management

[Hierarchy diagram: ESnet root CA with subordinate DOEGrids CA, NERSC CA, FusionGrid CA, and additional CAs.]
23
DOEGrids CA (one of several CAs) Usage Statistics
Report as of Jan 11, 2005
FusionGRID CA certificates not included here.
24
DOEGrids CA Usage - Virtual Organization Breakdown

[Chart: certificate usage broken down by virtual organization, including DOE-NSF collaborations.]
25
North American Policy Management Authority
  • The Americas Grid Policy Management Authority
  • An important step toward regularizing the
    management of trust in the international science
    community
  • Driven by European requirements for a single Grid
    Certificate Authority policy representing
    scientific/research communities in the Americas
  • Investigate cross-signing and CA hierarchy
    support for the science community
  • Investigate alternative authentication services
  • Peer with the other Grid Regional Policy
    Management Authorities (PMAs)
  • European Grid PMA www.eugridpma.org
  • Asian Pacific Grid PMA www.apgridpma.org
  • Started in Fall 2004 www.TAGPMA.org
  • Founding members
  • DOEGrids (ESnet)
  • Fermi National Accelerator Laboratory
  • SLAC
  • TeraGrid (NSF)
  • CANARIE (Canadian national RE network)

26
Federated Crypto Token Services
  • Strong authentication is needed to reduce the
    risk of identity theft
  • Identity theft was the mechanism of the
    successful attacks on US supercomputers in spring
    2004
  • RADIUS Authentication Fabric pilot project (RAF)
  • For enabling strong, cross-site authentication
  • Cryptographic tokens (e.g. RSA SecurID cards) are
    effective, but every site uses a different
    approach
  • ESnet has developed a federation service for
    crypto tokens (SecurID, CRYPTOCard, etc.); a
    sketch of a standard one-time-password
    construction follows
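
Most one-time-password tokens follow a construction like HOTP (RFC 4226); here is a minimal sketch (Python). RSA SecurID itself uses a proprietary time-based algorithm, so this only illustrates the general technique.

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # HMAC-based one-time password (RFC 4226): MAC the counter,
        # dynamically truncate to 31 bits, then take decimal digits.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 4226 test vector: secret "12345678901234567890", counter 0.
    print(hotp(b"12345678901234567890", 0))   # prints 755224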

27
ESnet RADIUS Authentication Fabric
  • What is the RAF?
  • Access to an application (e.g. system login) is
    based on authentication information provided by
    the token and the user's home site identity
  • A hierarchy of RADIUS servers that:
  • route authentication queries from an application
    (e.g. a login process) at one site to a One-Time
    Password (OTP) service at the user's home site
  • the home site can then authenticate the user
  • outsourcing the routing reduces inter-site
    connection management from O(n²) to O(n) (see the
    routing sketch after this list)
  • A collection of cross-site trust agreements
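
A minimal sketch of the realm-based routing idea (Python): each site registers once with the fabric, giving O(n) relationships instead of O(n²) pairwise agreements. The realm names and the stubbed RADIUS exchange are hypothetical.

    HOME_SERVERS = {
        # realm -> home-site RADIUS/OTP server (hypothetical entries)
        "lbl.gov": "radius.lbl.gov",
        "ornl.gov": "radius.ornl.gov",
        "slac.stanford.edu": "radius.slac.stanford.edu",
    }

    def forward_to(server: str, user: str, otp: str) -> bool:
        # Stub for the proxied RADIUS Access-Request/Access-Accept
        # exchange with the user's home-site OTP service.
        raise NotImplementedError

    def route_auth(identity: str, otp: str) -> bool:
        # Route the query by the realm part of "user@realm"; the home
        # site, not the application's site, verifies the token.
        user, _, realm = identity.partition("@")
        home = HOME_SERVERS.get(realm)
        if home is None:
            return False   # unknown realm: no trust agreement on file
        return forward_to(home, user, otp)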

28
References DOE Network Related Planning
Workshops
  • 1) High Performance Network Planning Workshop,
    August 2002
  • http://www.doecollaboratory.org/meetings/hpnpw
  • 2) DOE Science Networking Roadmap Meeting, June
    2003
  • http://www.es.net/hypertext/welcome/pr/Roadmap/index.html
  • 3) DOE Workshop on Ultra High-Speed Transport
    Protocols and Network Provisioning for
    Large-Scale Science Applications, April 2003
  • http://www.csm.ornl.gov/ghpn/wk2003
  • 4) Science Case for Large Scale Simulation, June
    2003
  • http://www.pnl.gov/scales/
  • 5) Workshop on the Road Map for the
    Revitalization of High End Computing, June 2003
  • http://www.cra.org/Activities/workshops/nitrd
  • http://www.sc.doe.gov/ascr/20040510_hecrtf.pdf
    (public report)
  • 6) ASCR Strategic Planning Workshop, July 2003
  • http://www.fp-mcs.anl.gov/ascr-july03spw
  • 7) Planning Workshops: Office of Science
    Data-Management Strategy, March and May 2004
  • http://www-conf.slac.stanford.edu/dmw2004