Title: High Performance Networking for Colleges and Universities: From the Last Kilometer to a Global Terabit Research Network
Slide 1: High Performance Networking for Colleges and Universities: From the Last Kilometer to a Global Terabit Research Network
- TERENA Networking Conference 2001
- Antalya, Turkey
- Michael A. McRobbie, PhD
- Vice President for Information Technology
- and Chief Information Officer
- Steven Wallace
- Chief Technologist and Director
- Advanced Network Management Laboratory
- Indiana Pervasive Computing Research Initiative
Slide 2: Network-Enabled Science and Research in the 21st Century
- Science and research are becoming progressively more global, with network-enabled worldwide collaborative communities rapidly forming in a broad range of areas
- Many are based around a few expensive, sometimes unique instruments or distributed complexes of sensors that produce vast amounts of data
- These global communities will carry out research based on this data
Slide 3: Network-Enabled Science and Research in the 21st Century
- This data will be:
  - collected via geographically distributed instruments
  - analyzed by supercomputers and large computer clusters
  - visualized with advanced 3-D display technology, and
  - stored in massive or large data storage systems
- All of this will be distributed globally
Slide 4: Examples of Network-Enabled Science
- NSF-funded Grid Physics Network (GriPhyN): need for petascale virtual data grids (i.e. capable of analyzing petabyte datasets)
- Compact Muon Solenoid (CMS) and A Toroidal LHC Apparatus (ATLAS) experiments using the Large Hadron Collider (LHC) at CERN: > 2.5 Gb/s
- Laser Interferometer Gravitational-Wave Observatory (LIGO): 200 GB - 5 TB data sets needing 2.5 Gb/s or greater for reasonable transfer times (see the arithmetic sketch below)
- Atacama Large Millimeter Array (ALMA)
- Collaborative video (e.g. HDTV): 20 Mb/s
- Sloan Digital Sky Survey (SDSS): > 1 Gb/s
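A quick back-of-the-envelope check of the LIGO figures above, as a minimal Python sketch (the dataset sizes and the 2.5 Gb/s rate come from the slide; the helper function is illustrative):

# Idealized transfer time, ignoring protocol overhead.
def transfer_hours(dataset_bytes: float, link_bps: float) -> float:
    return dataset_bytes * 8 / link_bps / 3600

LINK = 2.5e9  # the ~2.5 Gb/s quoted for LHC and LIGO needs

for size_bytes, label in [(200e9, "200 GB"), (5e12, "5 TB")]:
    print(f"{label}: {transfer_hours(size_bytes, LINK):.2f} h at 2.5 Gb/s")

# 200 GB: ~0.18 h (about 11 minutes); 5 TB: ~4.4 h.
# At 100 Mb/s the same 5 TB set would take roughly 4.6 days.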
Slide 5: A Vision for 21st Century Network-Enabled Science and Research
- The vision is for this global infrastructure and data to be integrated into Grids: seamless, global collaborative environments tailored to the specific needs of individual scientific communities
Slide 6: Components of Global Grids
- High performance networks are fundamental to integrating Global Grids together
- There are, very broadly speaking, three components to Global Grids:
  - Campus networks (the last kilometer)
  - National and Regional Research and Education Networks (NRRENs)
  - Global connections between NRRENs
Slide 7: Impediments to Global Grids
- Of these three components, on a worldwide scale, investment and engineering are only adequate for NRRENs
- Campus networks rarely provide scalable bandwidth to the desktop commensurate with the speeds of campus connections to NRRENs
- Global connections between NRRENs are major bottlenecks: they are very slow compared to NRREN backbone speeds
Slide 8: Building Global Grids
- To build a true Global Grid requires:
  - Scalable campus networks providing ubiquitous high-bandwidth connections to every desktop, commensurate with campus connections to NRRENs
  - Global connectivity between NRRENs at speeds comparable to the NRREN backbones, and which is also stable, persistent and of production quality, like the NRRENs themselves
Slide 9: Presentation Overview
- This talk describes:
  - Some NRRENs and their common characteristics
  - Grid-ready campus networks, with Indiana University's network architecture and management of IT as an example
  - A solution to the global connectivity problem that scales to a terabit global research network
Slide 10: 1. NRRENs
- Abilene
  - OC48 -> OC192
  - OC48-connected GigaPoPs (moving to a minimum of OC12)
  - ITN provider
  - 1 Gb/s sustained data rates seen
- CA*net3
- US federal nets (e.g. ESnet)
- DANTE -> GEANT
- APAN
- CERNET
Slides 11-16: (images only; no transcript)
Slide 17: NRRENs
- OC48 (2.5 Gb/s) backbones implemented today
- Moving to OC192 (10 Gb/s) as the next evolution (see the line-rate arithmetic below)
- Institutions access the backbone at OC12 or greater (a few connections at OC48)
- Native high-speed IPv4
- Support for IPv6 (but at much lower performance, due to router constraints)
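For reference, SONET OC-n line rates follow a simple formula, n x 51.84 Mb/s; a minimal Python check that reproduces the figures used throughout this talk:

# SONET/SDH: an OC-n link carries n x 51.84 Mb/s (gross line rate).
OC_BASE_MBPS = 51.84  # OC-1

for n in (3, 12, 48, 192, 768):
    print(f"OC-{n}: {n * OC_BASE_MBPS / 1000:.2f} Gb/s")

# OC-48  -> ~2.49 Gb/s (the "2.5 Gb/s" backbone speed above)
# OC-192 -> ~9.95 Gb/s (the "10 Gb/s" next step)
# OC-768 -> ~39.81 Gb/s (the 2004 target on slide 37)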
Slide 18: NRRENs
- Advanced services:
  - Typically run as open (visible) networks, not as a commercial service
  - IP multicast deployed in most backbones, but still not as production-quality as unicast, and not reliable internationally
  - QoS: mixed results, still in its infancy; very little going across more than one network
  - Intra-regional interconnect speeds range from OC3 to OC12, soon to be OC48 in some cases
Slide 19: 2.1 Campus Networks: the Critical Last Kilometer
- Grid applications will require guaranteed multi-Mb/s bandwidth per application, "end to end"
- That is, these speeds must be sustained from the desktop, through the various levels of the campus network, to the NRREN and on to the application target
- Thus campus networks must be architected to provide scalable levels of connectivity to NRRENs as NRREN speeds increase
- It makes no sense to have a shared 10 Mb/s desktop connection (delivering at most about 1 Mb/s) into a 10,000 Mb/s (10 Gb/s) NRREN! (see the sketch after this list)
- Providing appropriate levels of connectivity to the desktop, and scalable campus network architectures to support them, is a top priority in IT strategic planning at US universities
- A sizable portion of telecommunications budgets can go to this (e.g. $25M at Indiana University in 00/01)
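A minimal sketch of the mismatch just described (the 10 Mb/s shared, ~1 Mb/s effective, and 10 Gb/s figures are from the slide; the ratio arithmetic is illustrative):

# Desktop-to-backbone mismatch: a shared 10 Mb/s segment typically
# yields ~1 Mb/s per user, feeding an OC192-class (~10 Gb/s) NRREN.
effective_desktop_mbps = 1.0
nrren_backbone_mbps = 10_000.0

ratio = nrren_backbone_mbps / effective_desktop_mbps
print(f"Backbone is {ratio:,.0f}x the effective desktop bandwidth")
# -> 10,000x: the last 100 meters, not the backbone, is the bottleneck.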
Slide 20: Grid-Ready Campus Network Architectures
- The three main levels of campus network architecture are:
  - Desktop/intra-building (the last 100 meters)
  - Inter-building (connecting groups of buildings)
  - Backbone (connecting those groups)
- Each level must provide progressively more capacity, and the whole architecture must be scalable (see the sketch after this list)
- Campus networks commonly have two external connections:
  - Commercial Internet
  - Regional gigaPoPs (to NRRENs)
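A minimal sketch of the scaling rule above: modeling the three levels and asserting that each offers more capacity than the one below it (the bandwidth figures are hypothetical, for illustration only):

# Illustrative model of the three campus tiers; each level should offer
# progressively more aggregate capacity. Figures are hypothetical.
tiers_mbps = {
    "desktop/intra-building": 100,     # last 100 meters
    "inter-building": 1_000,           # building groups
    "backbone": 10_000,                # campus core
}

caps = list(tiers_mbps.values())
assert all(a < b for a, b in zip(caps, caps[1:])), "levels must scale up"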
Slide 21: Campus Networking (the Last 100 Meters)
- 10 Mb/s switched to the desktop:
  - Adequate for VHS-quality video distribution
  - Video conferencing
  - Digital library content such as CD-quality music
  - Gigabyte datasets transferred in 15 minutes
- 100 Mb/s switched to the desktop:
  - Cost of 10/100 Mb switch ports less than US$50
  - Gigabyte datasets transferred in 2 minutes
  - Suitable for HDTV distribution
- 1000 Mb/s switched to the desktop, now practical over Cat5 copper:
  - Cost of 100/1000 Mb switch ports less than US$500
  - Terabyte datasets transferred in 2.5 hours (transfer times checked in the sketch below)
- Fiber to the desktop probably reserved for 10 Gb/s and beyond, but necessary between buildings
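The transfer times quoted above follow from size x 8 / speed; a quick, idealized check (no protocol overhead assumed, so real transfers run somewhat slower):

# Sanity-check of the quoted transfer times (idealized arithmetic).
def transfer_minutes(size_gb: float, speed_mbps: float) -> float:
    return size_gb * 8000 / speed_mbps / 60

print(f"1 GB at 10 Mb/s:   {transfer_minutes(1, 10):.0f} min")         # ~13 min ("15 minutes")
print(f"1 GB at 100 Mb/s:  {transfer_minutes(1, 100):.1f} min")        # ~1.3 min ("2 minutes")
print(f"1 TB at 1000 Mb/s: {transfer_minutes(1000, 1000) / 60:.1f} h") # ~2.2 h ("2.5 hours")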
Slide 22: Grid-Ready Campus Network Enablers
- Gigabit wire-speed ASIC-based routers:
  - Currently providing 1 Gb/s uplinks; the next generation will support 10 Gb/s uplinks
  - QoS support in hardware
- Commodity-priced Ethernet switches that support 10/100 Mb/s and 1 Gb/s connections
Slide 23: 2.2 Indiana University's Grid-Ready Campus Network
- Switched 10 Mb/s standard for all 55,000 desktops
- Switched 100 Mb/s available on request
- 1 Gb/s available in selected cases
- OC12 (622 Mb/s) Internet2 connectivity
- Native support for IP multicast in both the layer 2 Ethernet switches and the layer 3 routers
- Support for DiffServ-based quality of service (a marking sketch follows below)
Slide 24: 2.3 Managing IT in US Higher Education
- In the US, IT is recognized as being:
  - of central importance in higher education
  - fundamental to teaching, learning and research
  - essential to responsible and accountable institutional management
  - a source of institutional competitive advantage
- It is also a major source of expenditure in US universities: fully costed, between 5% and 10% of an institution's total budget
Slide 25: Responsibility for Managing IT
- US universities have elevated IT to a portfolio of central importance, reporting directly to the president or chief academic officer
- This portfolio tends to be the responsibility of the chief information officer (CIO)
- University CIOs are typically responsible for central IT and for support of distributed IT (e.g. in departments, schools and faculties)
- This parallels earlier developments in US business
Slide 26: Strategic Planning for IT
- Given the vital importance of IT, US universities have developed IT strategic plans
- These plans guide the institution's future development of, and investment in, IT
- They are also used to leverage considerable additional public and private funding for university IT infrastructure
Slide 27: 2.4 An Example: Indiana University
- Founded in 1820
- State public university with:
  - $1.9B budget (99-00), with 27% from the State of Indiana
  - 7 campuses state-wide (the two largest, research-intensive campuses in Bloomington and Indianapolis)
  - 97,150 students
  - 4,276 faculty
  - 9,844 appointed staff
  - 42,000 course sections
  - $1B endowment
Slide 28: IT at Indiana University
- CIO position created in '96; reports directly to the IU President
- Responsible for central IT on all campuses
- Central IT budget from all sources: $100M
- 1,200 staff
- Departments, schools and faculties expend about a further $50M
- Central IT comprises:
  - telecommunications (e.g. data, voice and video on campus, intra- and interstate, and internationally): 35% of the budget
  - research and academic computing (e.g. supercomputing, massive data storage, large-scale VR): 13% of the budget
  - teaching and learning technologies (e.g. user support/education, desktop life-cycle funding, classroom IT, enterprise software licensing, student labs, Web support): 28% of the budget
  - administrative computing (enterprise information systems, e.g. student, financial, HR and library systems, enterprise databases and storage): 24% of the budget
Slide 29: Strategic Planning for IT at Indiana University
- Goal: Indiana University to be a leader in the use and application of information technology
- CIO responsible for developing the IT Strategic Plan to achieve this goal:
  - first University-wide IT Strategic Plan
  - used the IT Committee system, with 200 people involved in preparation
  - prepared December '97 to May '98, then discussed University-wide; approved by the President and Board of Trustees, December '98
- CIO responsible for implementation
- 5-year plan consisting of 10 major recommendations and 68 actions (http://www.indiana.edu/ovpit/strategic/)
- Implementation Plan and full costings developed in parallel
- Full cost: $210M over 5 years ($120M in new funding from the State, $90M in re-programmed University funding)
Slide 30: 3.1 Towards a Global Terabit Research Network (GTRN)
- Global network-enabled collaborative Grids require a true high-speed global research and education network that:
  - is of production quality (managed as a production service; redundant, stable and secure)
  - is persistent (based on long-term agreements with carrier(s) and others)
  - provides a uniform form of connection globally, through global network access points (GNAPs)
  - provides interconnect speeds comparable to NRREN backbone speeds (presently OC48, going to OC192)
  - scales to a terabit per second data rate during this decade
Slide 31: Impediments to Building Global Grids
- International connections are very slow compared with NRREN backbone speeds
- Long-term funding is uncertain (e.g. the NSF HPIIS program)
- The global connection effort is not well coordinated
- Overly reliant on transit through US infrastructure
- Connections are frequently via ATM or IP clouds, making management of advanced services difficult
- Poor coordination of advanced service deployment
- Extreme difficulty ensuring reasonable end-to-end performance
Slide 32: Connectivity to US Transit Infrastructure
- (diagram: Asia Pacific, Europe and the Americas interconnecting via US transit infrastructure)
Slide 33: STAR TAP and the International Transit Network (ITN)
- STAR TAP, CA*net3 and Abilene provide some level of international transit across North America
- Abilene offers convenient international transit at multiple landing sites; however, transit is not offered to other NRRENs (e.g. ESnet)
- STAR TAP requires a connection to the AADS best-effort ATM service (reducing the ability to deploy QoS)
Slide 34: Indiana University
- Firsthand experience with these difficulties through its Global NOC
- http://globalnoc.iu.edu/
Slide 35: (image only; no transcript)
Slide 36: 3.2 Towards a GTRN
- A single global backbone interconnecting global network access points (GNAPs) that provide peering within a country or region
- Global backbone speeds comparable to those of NRRENs, i.e. OC192 in 2002
- Based on stable carrier infrastructure
- Persistent: based on long-term (5-10 year) agreements with carriers, router vendors and optical transmission equipment vendors
Slide 37: Towards a GTRN
- Scalable: e.g. OC768 by 2004, multiple wavelengths running striped OC768s by 2005, and terabit/sec transmission by 2006 (see the arithmetic sketch after this list)
- GNAPs connect at OC48 and above, scaling up as backbone speeds scale up
- Production service with 24x7x365 management through a global NOC
- Coordinated global deployment of advanced services (e.g. QoS, IPv6, multicast)
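The terabit target above implies striping traffic across many wavelengths; a quick sketch of the implied wavelength count (the OC768 line rate is standard; the count is derived arithmetic, not a figure from the talk):

# How many striped OC-768 wavelengths reach 1 Tb/s?
OC768_GBPS = 768 * 51.84 / 1000   # ~39.8 Gb/s per wavelength

lambdas = 1000 / OC768_GBPS       # 1 Tb/s target, expressed in Gb/s
print(f"~{lambdas:.0f} striped OC-768 wavelengths for 1 Tb/s")  # ~25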
Slides 38-42: (images only; no transcript)