1
ESnet4: Networking for the Future of DOE Science
CEFNet Workshop, Sept. 19, 2007
William E. Johnston, ESnet Department Head and
Senior Scientist
Energy Sciences Network, Lawrence Berkeley
National Laboratory
wej@es.net, www.es.net. This talk is available at
www.es.net
Networking for the Future of Science
2
DOE's Office of Science: Enabling Large-Scale
Science
  • The Office of Science (SC) is the single largest
    supporter of basic research in the physical
    sciences in the United States, providing more
    than 40 percent of total funding for the
    Nation's research programs in high-energy
    physics, nuclear physics, and fusion energy
    sciences (http://www.science.doe.gov). SC funds
    25,000 PhDs and PostDocs
  • A primary mission of SC's National Labs is to
    build and operate very large scientific
    instruments - particle accelerators, synchrotron
    light sources, very large supercomputers - that
    generate massive amounts of data and involve very
    large, distributed collaborations
  • ESnet - the Energy Sciences Network - is an SC
    program whose primary mission is to enable the
    large-scale science of the Office of Science that
    depends on:
  • Sharing of massive amounts of data
  • Supporting thousands of collaborators world-wide
  • Distributed data processing
  • Distributed data management
  • Distributed simulation, visualization, and
    computational steering
  • Collaboration with the US and International
    Research and Education community
  • In addition to the National Labs, ESnet serves
    much of the rest of DOE, including NNSA - about
    75,000-100,000 users in total

3
What ESnet Is
  • A large-scale IP network built on a national
    circuit infrastructure with high-speed
    connections to all major US and international
    research and education (R&E) networks
  • An organization of 30 professionals structured
    for the service
  • The ESnet organization builds and operates the
    network
  • An operating entity with an FY06 budget of $26.6M
  • A tier 1 ISP (direct peerings with all major
    networks)
  • The primary DOE network provider
  • Provides production Internet service to all of
    the major DOE Labs and most other DOE sites
  • Based on DOE Lab populations, it is estimated
    that between 50,000 and 100,000 users depend on
    ESnet for global Internet access
  • additionally, each year more than 18,000 non-DOE
    researchers from universities, other government
    agencies, and private industry use Office of
    Science facilities

PNNL supplements its ESnet service with
commercial service
4
ESnet History
ESnet0/MFENet, mid-1970s-1986: 56 Kbps microwave and satellite links
ESnet1, 1986-1995: ESnet formed to serve the Office of Science; 56 Kbps X.25 to 45 Mbps T3
ESnet2, 1995-2000: Partnered with Sprint to build the first national-footprint ATM network; IP over 155 Mbps ATM
ESnet3, 2000-2007: Partnered with Qwest to build a national Packet over SONET network and optical channel Metropolitan Area fiber; IP over 10 Gbps SONET
ESnet4, 2007-2012: Partnering with Internet2 and the US Research and Education community to build a dedicated national optical network; IP and virtual circuits on a configurable optical infrastructure with at least 5-6 optical channels of 10-100 Gbps each
5
ESnet Provides Global High-Speed Internet
Connectivity for DOE Facilities and Collaborators (2007)
[Map: the ESnet production IP core with its core hubs, 45 end sites, commercial peering points, specific R&E network peers, other R&E peering points, and high-speed peering points with Internet2/Abilene. International peers include Japan (SINet), Australia (AARNet), Canada (CA*net4), Taiwan (TANet2, ASCC), SingAREN, Korea (KREONET2), France, GLORIAD (Russia, China), KAREN/REANNZ, and USLHCNet via StarLight, plus NLR, PacificWave, MREN, and MAE-E in the US.]
6
ESnet FY06 Budget is Approximately $26.6M
Approximate Budget Categories
Funding sources: SC operating $20.1M; SC special projects $1.2M; SC R&D $0.5M; Other DOE $3.8M; Carryover $1.0M. Total funds: $26.6M
Expenses: Circuits and hubs $12.7M; Internal infrastructure, security, and disaster recovery $3.4M; Engineering and research $2.9M; WAN equipment $2.0M; Collaboration services $1.6M; Operations $1.1M; Management and compliance $0.7M; Special projects (Chicago and LI MANs) $1.2M; Target carryover $1.0M. Total expenses: $26.6M
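As a quick cross-check of the figures above, a minimal Python sketch (category labels paraphrased from the slide) confirms that the funding sources and the expense categories each sum to $26.6M:

```python
# Quick consistency check of the FY06 budget figures above
# (a minimal sketch; category labels are paraphrased from the slide).
funds = {
    "SC operating": 20.1,
    "SC special projects": 1.2,
    "SC R&D": 0.5,
    "Other DOE": 3.8,
    "Carryover": 1.0,
}
expenses = {
    "Circuits and hubs": 12.7,
    "Internal infrastructure, security, disaster recovery": 3.4,
    "Engineering and research": 2.9,
    "WAN equipment": 2.0,
    "Collaboration services": 1.6,
    "Operations": 1.1,
    "Management and compliance": 0.7,
    "Special projects (Chicago and LI MANs)": 1.2,
    "Target carryover": 1.0,
}
print(f"Total funds:    ${sum(funds.values()):.1f}M")     # $26.6M
print(f"Total expenses: ${sum(expenses.values()):.1f}M")  # $26.6M
```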
7
A Changing Science Environment is the Key Driver
of the Next Generation ESnet
  • Large-scale collaborative science - big
    facilities, massive data, thousands of
    collaborators - is now a significant aspect of
    the Office of Science (SC) program
  • SC science community is almost equally split
    between Labs and universities
  • SC facilities have users worldwide
  • Very large international (non-US) facilities
    (e.g. LHC and ITER) and international
    collaborators are now a key element of SC science
  • Distributed systems for data analysis,
    simulations, instrument operation, etc., are
    essential and are now common (in fact they
    dominate the data analysis that now generates
    50% of all ESnet traffic)

8
Science Networking Requirements Aggregation
Summary
Columns: science driver (science areas / facilities); end-to-end reliability; connectivity; end-to-end bandwidth today; end-to-end bandwidth in 5 years; traffic characteristics; network services
  • Magnetic Fusion Energy: 99.999% reliability (impossible without full redundancy); DOE sites, US universities, industry; 200 Mbps today, 1 Gbps in 5 years; bulk data, remote control; guaranteed bandwidth, guaranteed QoS, deadline scheduling
  • NERSC and ALCF: DOE sites, US universities, international, other ASCR supercomputers; 10 Gbps today, 20 to 40 Gbps in 5 years; bulk data, remote control, remote file system sharing; guaranteed bandwidth, guaranteed QoS, deadline scheduling, PKI / Grid
  • NLCF: DOE sites, US universities, industry, international; backbone bandwidth parity today and in 5 years; bulk data, remote file system sharing
  • Nuclear Physics (RHIC): DOE sites, US universities, international; 12 Gbps today, 70 Gbps in 5 years; bulk data; guaranteed bandwidth, PKI / Grid
  • Spallation Neutron Source: high reliability (24x7 operation); DOE sites; 640 Mbps today, 2 Gbps in 5 years; bulk data
9
Science Network Requirements Aggregation Summary
Columns: science driver (science areas / facilities); end-to-end reliability; connectivity; end-to-end bandwidth today; end-to-end bandwidth in 5 years; traffic characteristics; network services
  • Advanced Light Source: DOE sites, US universities, industry; 1 TB/day (300 Mbps) today, 5 TB/day (1.5 Gbps) in 5 years; bulk data, remote control; guaranteed bandwidth, PKI / Grid
  • Bioinformatics: DOE sites, US universities; 625 Mbps (12.5 Gbps in two years) today, 250 Gbps in 5 years; bulk data, remote control, point-to-multipoint; guaranteed bandwidth, high-speed multicast
  • Chemistry / Combustion: DOE sites, US universities, industry; 10s of Gbps in 5 years; bulk data; guaranteed bandwidth, PKI / Grid
  • Climate Science: DOE sites, US universities, international; 5 PB per year (5 Gbps) in 5 years; bulk data, remote control; guaranteed bandwidth, PKI / Grid
  • High Energy Physics (LHC): 99.95% reliability (less than 4 hrs/year downtime); US Tier1 (FNAL, BNL), US Tier2 (universities), international (Europe, Canada); 10 Gbps today, 60 to 80 Gbps (30-40 Gbps per US Tier1) in 5 years; bulk data, coupled data analysis processes; guaranteed bandwidth, traffic isolation, PKI / Grid
Immediate Requirements and Drivers
10
The Next Level of Detail In Network Requirements
  • Consider the LHC networking as an example,
    because their requirements are fairly well
    characterized

11
LHC will be the largest scientific experiment and
generate the most data the scientific community
has ever tried to manage. The data management
model involves a world-wide collection of data
centers that store, manage, and analyze the data
and that are integrated through network
connections with typical speeds in the 10 Gbps
range.
closely coordinated and interdependent
distributed systems that must have predictable
intercommunication for effective functioning
ICFA SCIC
12
The current, pre-production data movement sets
the scale of the LHC distributed data analysis
problem. The CMS experiment at LHC is already
moving 32 petabytes/yr among the Tier 1 and Tier
2 sites.
Accumulated data (Terabytes) moved among the CMS
Data Centers (Tier 1 sites) and analysis centers
(Tier 2 sites) during the past four months (8
petabytes of data)
LHC/CMS
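For scale, a back-of-the-envelope calculation (assuming 1 PB = 10^15 bytes) shows that 32 petabytes/yr corresponds to a sustained average of roughly 8 Gb/s:

```python
# Average network rate implied by CMS moving 32 PB/yr
# (assuming 1 PB = 10**15 bytes).
petabytes_per_year = 32
bits_per_year = petabytes_per_year * 1e15 * 8
seconds_per_year = 365 * 24 * 3600
print(f"{bits_per_year / seconds_per_year / 1e9:.1f} Gb/s sustained average")  # ~8.1 Gb/s
```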
13
Estimated Aggregate Link Loadings, 2007-08
[Map: ESnet4 topology with estimated aggregate link loadings for 2007-08. Unlabeled links are 10 Gb/s; labeled links show estimated loadings and committed bandwidth in Gb/s (roughly 2.5 to 13 Gb/s); existing site-supplied circuits are 2.5 Gb/s, and one site link is an OC48.]
14
ESnet4 2007-8 Estimated Bandwidth Commitments
[Map: ESnet4 2007-08 estimated bandwidth commitments overlaid on the core topology. Shown are the Long Island MAN, the West Chicago MAN (600 W. Chicago, FNAL, ANL, and Starlight/USLHCNet to CERN), the San Francisco Bay Area MAN, Newport News (ELITE), and MAX. All circuits are 10 Gb/s and unlabeled links are 10 Gb/s; numeric labels show committed bandwidth in Gb/s (e.g. 13 Gb/s at Starlight, 10 Gb/s for USLHCNet at New York, and 29 Gb/s total into Chicago).]
15
Are These Estimates Realistic? YES!
[Chart: FNAL outbound CMS traffic, MBytes/sec]
Max 1064 MBy/s (8.5 Gb/s), average 394 MBy/s (3.2 Gb/s)
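The Gb/s figures in the caption are just the byte-to-bit conversion of the measured MBy/s rates; a one-line check:

```python
# Byte-to-bit conversion of the measured FNAL CMS rates.
for label, mby_per_s in [("max", 1064), ("average", 394)]:
    print(f"{label}: {mby_per_s} MBy/s = {mby_per_s * 8 / 1000:.1f} Gb/s")
# max: 1064 MBy/s = 8.5 Gb/s
# average: 394 MBy/s = 3.2 Gb/s
```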
16
Footprint of Largest Office of Science Labs' Data
Sharing Collaborators - Large-Scale Science is an
International Endeavor
  • Top 100 data flows generate 50% of all ESnet
    traffic (ESnet handles about 3x10^9 flows/mo.)
  • 91 of the top 100 flows are from the Labs to
    other institutions (shown) (CY2005 data)

17
Observed Evolution of Historical ESnet Traffic
Patterns
ESnet total traffic passed 2 Petabytes/mo about
mid-April, 2007
ESnet traffic has increased by 10X every 47
months, on average, since 1990
[Chart: ESnet monthly accepted traffic, Terabytes/month, January 2000 - June 2007, with the top 100 site-to-site workflows broken out (site-to-site workflow data not available for the earlier years)]
  • ESnet is currently transporting more than 1
    petabyte (1000 terabytes) per month
  • More than 50% of the traffic is now generated by
    the top 100 sites - large-scale science dominates
    all ESnet traffic

18
ESnet Traffic has Increased by 10X Every 47
Months, on Average, Since 1990
[Log plot of ESnet monthly accepted traffic, January 1990 - June 2007. Milestones: 100 MBy/mo in Aug. 1990; 1 TBy/mo in Oct. 1993 (38 months later); 10 TBy/mo in Jul. 1998 (57 months); 100 TBy/mo in Nov. 2001 (40 months); 1 PBy/mo in Apr. 2006 (53 months).]
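The "10X every 47 months" figure is the average of the intervals between the milestones above; a short sketch that also converts it to an implied annual growth factor:

```python
# Intervals (in months) between successive 10X traffic milestones,
# taken from the log plot above, and the implied annual growth.
intervals_months = [38, 57, 40, 53]   # Aug 1990 -> Oct 1993 -> Jul 1998 -> Nov 2001 -> Apr 2006
avg_months = sum(intervals_months) / len(intervals_months)
annual_factor = 10 ** (12 / avg_months)
print(f"average interval for a 10X increase: {avg_months:.0f} months")  # 47 months
print(f"implied growth: about {annual_factor:.1f}x per year")           # ~1.8x per year
```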
19
Summary of All Requirements To-Date
  • Requirements from science case studies
  • Build the ESnet core up to 100 Gb/s within 5
    years
  • Provide high level of reliability
  • Deploy initial network to accommodate LHC
    collaborator footprint
  • Implement initial network to provide for LHC data
    path loadings
  • Provide the network as a service-oriented
    capability
  • Requirements from observing traffic growth and
    change trends in the network
  • Most of ESnet science traffic has a source or
    sink outside of ESnet
  • Requires high-bandwidth peering
  • Reliability and bandwidth requirements demand
    that peering be redundant
  • Bandwidth and service guarantees must traverse
    R&E peerings
  • Provide 15 Gb/s core within four years and 150
    Gb/s core within eight years
  • Provide a rich diversity and high bandwidth for
    R&E peerings
  • Economically accommodate a very large volume of
    circuit-like traffic
  • Large-scale science is now the dominant user of
    the network
  • Satisfying the demands of large-scale science
    traffic into the future will require a
    purpose-built, scalable architecture
  • Requirements from SC Programs
  • Provide consulting on system / application
    network tuning

20
The Evolution of ESnet Architecture
[Diagrams: the pre-2005 architecture (a single ESnet IP core ring) and the 2006-07 architecture (ESnet IP core plus ESnet Science Data Network (SDN) core, with Metro Area Rings)]
  • ESnet to 2005
  • A routed IP network with sites singly attached
    to a national core ring
  • ESnet from 2006-07
  • A routed IP network with sites dually connected
    on metro area rings or dually connected directly
    to core ring
  • A switched network providing virtual circuit
    services for data-intensive science
  • Rich topology offsets the lack of dual,
    independent national cores

Legend: ESnet sites; ESnet hubs / core network connection points; Metro area rings (MANs); Circuit connections to other science networks (e.g. USLHCNet)
21
ESnet Metropolitan Area Network Ring Architecture
for High Reliability Sites
[Diagram: MAN ring architecture. A large science site sits on a MAN fiber ring with 2-4 x 10 Gbps channels provisioned initially and expansion capacity to 16-64. The ring connects to the ESnet production IP core hubs (IP core east and west, with IP core routers) and the ESnet SDN core hubs (SDN core east and west). The ESnet MAN switch uses independent port cards supporting multiple 10 Gb/s line interfaces. The site receives ESnet production IP service through its site gateway router, while ESnet-managed virtual circuit services (tunneled through the IP backbone) are delivered as SDN circuits to site systems via the ESnet switch, the site edge router, and the site LAN.]
22
ESnet4
  • Internet2 has partnered with Level 3
    Communications Co. and Infinera Corp. for a
    dedicated optical fiber infrastructure with a
    national footprint and a rich topology - the
    Internet2 Network
  • The fiber will be provisioned with Infinera Dense
    Wave Division Multiplexing equipment that uses an
    advanced, integrated optical-electrical design
  • Level 3 will maintain the fiber and the DWDM
    equipment
  • ESnet has partnered with Internet2 to
  • Share the optical infrastructure, but build
    independent L2/3 networks
  • Develop new circuit-oriented network services
  • Explore mechanisms that could be used for the
    ESnet Network Operations Center (NOC) and the
    Internet2/Indiana University NOC to back each
    other up for disaster recovery purposes

23
L1 / Optical Overlay - Internet2 and ESnet
Optical Node
[Diagram: a shared optical node. The Internet2/Level3/Infinera national optical infrastructure (Infinera DTN, with fiber east, west, and north/south) carries both the Internet2 IP core (via a Ciena CoreDirector grooming device) and the ESnet metro-area networks, with dynamically allocated and routed waves planned for the future. Both ESnet and Internet2 attach support devices at the node: measurement, out-of-band access, monitoring, and security. A network testbed is implemented as an optical overlay.]
24
Conceptual Architecture of the Infinera DTN
Digital Terminal
  • Band Mux Module (BMM)
  • Multiplexes 100 Gb/s bands onto a 400 Gb/s or 800
    Gb/s line (see the capacity sketch at the end of
    this slide)
  • Digital Line Module (DLM)
  • 100 Gb/s DWDM line module (10 λ x 10 Gb/s)
  • Integrated digital switch enables add/drop
    grooming at 2.5 Gb/s (ODU-1) granularity
  • Tributary Adapter Module (TAM)
  • Provides the customer-facing optical interface
  • Supports a variety of data rates and service types
  • Up to 5 TAMs per DLM
  • Tributary Optical Module (TOM)
  • Pluggable client-side optical module (XFP, SFP)
  • Supports 10GE and OC48/OC192 SONET
  • Optical Supervisory Channel (OSC)
  • Management Plane Traffic: includes traffic from
    the remote management systems to access network
    elements for the purpose of managing them
  • Control Plane Traffic: GMPLS routing and signaling
    control protocol traffic
  • Datawire Traffic: customer management traffic,
    interconnecting customers' 10 Mbps Ethernet LAN
    segments at various sites through AUX port
    interfaces

[Diagram: each DLM (10 x 10 Gb/s) attaches through the intra- and inter-DLM switch fabric, coordinated by a control processor, to the BMM, which multiplexes the 100 Gb/s bands onto the WAN fiber]
  • Notes
  • This conceptual architecture is based on W.
    Johnston's understanding of the Infinera DTN
  • All channels are full duplex, which is not made
    explicit here
  • The switch fabric operates within a DLM and also
    interconnects the DLMs
  • The Management Control Module is not shown

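The capacity figures on this slide compose directly; a small sketch of the arithmetic (the 4- and 8-band cases below are an assumption matching the 400 Gb/s and 800 Gb/s line rates quoted above):

```python
# Capacity arithmetic for the DTN modules described above.
WAVE_GBPS = 10        # each DWDM wavelength carries 10 Gb/s
WAVES_PER_DLM = 10    # a DLM is one 100 Gb/s band: 10 lambdas x 10 Gb/s
ODU1_GBPS = 2.5       # add/drop grooming granularity

dlm_gbps = WAVE_GBPS * WAVES_PER_DLM
print(f"DLM capacity: {dlm_gbps} Gb/s "
      f"({dlm_gbps / ODU1_GBPS:.0f} ODU-1 grooming slots)")

# Assumption: 4 or 8 bands multiplexed by the BMM correspond to the
# 400 Gb/s and 800 Gb/s line rates quoted on the slide.
for bands in (4, 8):
    print(f"{bands} bands -> {bands * dlm_gbps} Gb/s line")
```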
25
ESnet 4 Backbone September 30, 2007
[Map: backbone hubs at Seattle, Sunnyvale, Los Angeles, San Diego, Boise, Denver, Albuquerque, El Paso, Kansas City, Houston, Chicago, Nashville, Atlanta, Cleveland, New York City, Boston, and Washington DC]

26
Typical ESnet4 Wide Area Network Hub
Washington, DC
27
Typical Medium Size ESnet4 Hub
DC power controllers
DC power controllers
local Ethernet
10 Gbps network tester
secure terminal server (telephone modem access)
Cisco 7609, Science Data Network switch (about
$365K, list)
Juniper M7i, IP peering router (about $60K, list)
Juniper M320, core IP router (about $1.65M, list)
28
Note that the major ESnet sites are now
directly on the ESnet core network
Long Island MAN
West Chicago MAN
e.g. the bandwidth into and out of FNAL is equal
to, or greater than, the ESnet core bandwidth
[Map: the ESnet4 core with the Long Island MAN and the West Chicago MAN attached directly to the core; core segments are labeled with their planned number of 10 Gb/s optical channels (3λ to 5λ), and site circuits include an OC48.]
29
Aggregate Estimated Link Loadings, 2010-11
[Map: estimated aggregate link loadings for 2010-11 on the ESnet4 topology. Labeled links show estimated loadings in Gb/s (roughly 5 to 50 Gb/s); core segments carry 3λ-5λ of 10 Gb/s each; ESnet SDN switch hubs are marked.]
30
ESnet4 2010-11 Estimated Bandwidth Commitments
[Map: ESnet4 2010-11 estimated bandwidth commitments, including a CMS commitment of about 25 Gb/s in the Chicago area. Labeled links show committed bandwidth in Gb/s (roughly 5 to 25 Gb/s); core segments carry 3λ-5λ of 10 Gb/s each; ESnet SDN switch hubs are marked.]
31
ESnet4 IP + SDN, 2011 Configuration
[Map: the planned 2011 IP + SDN configuration; each core segment is labeled with its planned number of 10 Gb/s optical channels (3λ to 5λ).]
32
ESnet4 Planned Configuration: 40-50 Gbps in
2009-2010, 160-400 Gbps in 2011-2012
[Map: the planned configuration, with an IP core and a Science Data Network core connecting Seattle, Sunnyvale, LA, San Diego, Boise, Denver, Albuquerque, El Paso, Kansas City, Tulsa, Houston, Chicago, Nashville, Atlanta, Jacksonville, Cleveland, Boston, New York, and Washington DC. International connections include Canada (CANARIE), CERN (30 Gbps on each of two paths), Asia-Pacific, GLORIAD (Russia and China), Europe (GÉANT), Australia, and South America (AMPATH).]
Core network fiber path is 14,000 miles /
24,000 km
33
New Network Service: Virtual Circuits
  • The control plane and user interface for the
    ESnet version of light paths
  • Service Requirements
  • Guaranteed bandwidth service
  • User-specified bandwidth - requested and managed
    in a Web Services framework (an illustrative
    request sketch follows this list)
  • Traffic isolation and traffic engineering
  • Provides for high-performance, non-standard
    transport mechanisms that cannot co-exist with
    commodity TCP-based transport
  • Enables the engineering of explicit paths to meet
    specific requirements
  • e.g. bypass congested links, using lower
    bandwidth, lower latency paths
  • End-to-end (cross-domain) connections between
    Labs and collaborating institutions
  • Secure connections
  • Provides end-to-end connections between Labs and
    collaborator institutions
  • The circuits are secure to the edges of the
    network (the site boundary) because they are
    managed by the control plane of the network,
    which is isolated from the general traffic
  • Reduced cost of handling high bandwidth data
    flows
  • Highly capable routers are not necessary when
    every packet goes to the same place
  • Use lower-cost (factor of 5x) switches to route
    the packets

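To make the "user-specified bandwidth requested in a Web Services framework" idea concrete, here is an illustrative sketch of the kind of parameters such a reservation request carries; the field names and endpoint hostnames are hypothetical, not the actual ESnet/OSCARS schema:

```python
# Illustrative sketch only: the kind of request a Web Services style,
# user-driven bandwidth reservation interface would carry. Field names
# and hostnames are hypothetical, not the actual ESnet/OSCARS schema.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    source: str           # Lab edge endpoint
    sink: str             # collaborating-institution endpoint
    bandwidth_mbps: int   # user-specified guaranteed bandwidth
    start: datetime       # reservation window
    end: datetime
    description: str = ""

start = datetime(2007, 10, 1, 8, 0)
req = CircuitRequest(
    source="lab-border.example.net",        # placeholder hostname
    sink="tier2-site.example.edu",          # placeholder hostname
    bandwidth_mbps=5000,
    start=start,
    end=start + timedelta(hours=12),
    description="bulk data transfer for a large-scale science workflow",
)
print(req)
```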
34
The Mechanisms Underlying OSCARS
Based on the source and sink IP addresses, the
route of the LSP between ESnet border routers is
determined using topology information from
OSPF-TE. The path of the LSP can be explicitly
directed to take the SDN network. On the SDN
Ethernet switches all traffic is MPLS switched
(Layer 2.5), which stitches together the VLANs.
On ingress to ESnet, packets matching the
reservation profile are filtered out (i.e. policy
based routing), policed to the reserved
bandwidth, and injected into an LSP.
MPLS-labeled packets from the source are placed
in a separate high-priority interface queue to
ensure guaranteed bandwidth; regular production
traffic uses the standard, best-effort queue.
[Diagram: Source -> IP link -> ESnet IP border router -> Label Switched Path across the SDN switches and SDN links (RSVP and MPLS enabled on internal interfaces, stitching together VLANs 1-3) -> IP border router -> IP link -> Sink]
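A minimal sketch (not ESnet's actual router configuration) of the ingress behavior described above: packets matching a reservation profile are policed to the reserved bandwidth with a simple token bucket, tagged with an MPLS label for the LSP, and placed in the high-priority queue, while everything else stays in the best-effort queue. The addresses and label value are placeholders:

```python
# Minimal sketch (not ESnet's actual router configuration) of the ingress
# behavior described above: packets matching a reservation profile are
# policed to the reserved bandwidth (simple one-second token bucket here),
# tagged with an MPLS label for the LSP, and sent to the high-priority
# queue; all other traffic uses the standard, best-effort queue.
import time

class Reservation:
    def __init__(self, src, dst, rate_bps, mpls_label):
        self.src, self.dst = src, dst
        self.rate_bps, self.mpls_label = rate_bps, mpls_label
        self.tokens = rate_bps            # bucket depth: one second of traffic
        self.last = time.monotonic()

    def matches(self, pkt):               # the "policy based routing" filter
        return pkt["src"] == self.src and pkt["dst"] == self.dst

    def within_profile(self, pkt):        # police to the reserved bandwidth
        now = time.monotonic()
        self.tokens = min(self.rate_bps,
                          self.tokens + (now - self.last) * self.rate_bps)
        self.last = now
        bits = pkt["size_bytes"] * 8
        if bits <= self.tokens:
            self.tokens -= bits
            return True
        return False

def classify(pkt, reservations):
    for r in reservations:
        if r.matches(pkt):
            if r.within_profile(pkt):
                pkt["mpls_label"] = r.mpls_label   # inject into the LSP
                return "high-priority queue"
            return "out of profile: drop (or re-mark to best effort)"
    return "standard, best-effort queue"

# Placeholder addresses (RFC 5737 documentation ranges) and label value.
reservations = [Reservation("192.0.2.10", "198.51.100.20",
                            rate_bps=2e9, mpls_label=1001)]
packet = {"src": "192.0.2.10", "dst": "198.51.100.20", "size_bytes": 1500}
print(classify(packet, reservations))    # -> high-priority queue
```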
35
Conclusions from Measurements On Hybrid
Dedicated Bandwidth Connections, Rao, et al
  • Test scenario is ESnet-USN
  • MPLS tunnels through routed IP network
  • Ethernet over SONET
  • Hybrid 1 Gbps connections
  • Not tested: native Ethernet
  • Basic Conclusions
  • Transport Throughputs
  • UDP: all modalities are very similar
  • TCP: SONET is slightly better for multiple
    streams (6%)
  • Jitter Measurements
  • Ping, tcpmon, tcpcliser: all modalities are
    comparable
  • Differences are minor and likely only to affect
    finer control applications
  • Measurements On Hybrid Dedicated Bandwidth
    Connections, Nagi Rao, Bill Wing (Oak Ridge
    National Laboratory), Qishi Wu (University of
    Memphis), Nasir Ghani (Tennessee Technological
    University), Tom Lehman (Information Science
    Institute East), Chin Guok, Eli Dart (ESnet),
    raons@ornl.gov. May 11, 2007, INFOCOM 2007
    Workshop on High Speed Networks, Anchorage, Alaska

36
References
  • High Performance Network Planning Workshop,
    August 2002
  • http://www.doecollaboratory.org/meetings/hpnpw
  • Science Case Studies Update, 2006 (contact
    eli@es.net)
  • DOE Science Networking Roadmap Meeting, June 2003
  • http://www.es.net/hypertext/welcome/pr/Roadmap/index.html
  • DOE Workshop on Ultra High-Speed Transport
    Protocols and Network Provisioning for
    Large-Scale Science Applications, April 2003
  • http://www.csm.ornl.gov/ghpn/wk2003
  • Science Case for Large Scale Simulation, June
    2003
  • http://www.pnl.gov/scales/
  • Workshop on the Road Map for the Revitalization
    of High End Computing, June 2003
  • http://www.cra.org/Activities/workshops/nitrd
  • http://www.sc.doe.gov/ascr/20040510_hecrtf.pdf
    (public report)
  • ASCR Strategic Planning Workshop, July 2003
  • http://www.fp-mcs.anl.gov/ascr-july03spw
  • Planning Workshops - Office of Science
    Data-Management Strategy, March - May 2004
  • http://www-conf.slac.stanford.edu/dmw2004
  • For more information contact Chin Guok
    (chin@es.net). Also see
  • http://www.es.net/oscars