Internet2 Abilene Network and Next Generation Optical Networking - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
Internet2 Abilene Network and Next Generation
Optical Networking
  • Steve Corbató
  • Director, Backbone Network Infrastructure
  • Access NovaForum
  • May 28, 2002

2
This presentation
  • Abilene Network today
  • Optical networking evolution
  • Next generation of Abilene
  • Future national optical initiatives

3
Networking hierarchy
  • Internet2 networking is a fundamentally
    hierarchical and collaborative activity
  • International networking
  • Ad hoc → Global Terabit Research Network (GTRN)
  • National backbones
  • Regional networks
  • GigaPoPs → advanced regional networks
  • Campus networks
  • Much activity now at the metropolitan and
    regional scales

4
Abilene focus
  • Goals
  • Enabling innovative applications and advanced
    services not possible over the commercial
    Internet
  • Backbone and regional infrastructure provide a
    vital substrate for the continuing culture of
    Internet advancement in the university/corporate
    research sector
  • Advanced service efforts
  • Multicast
  • IPv6
  • QoS
  • Measurement
  • an open, collaborative approach
  • Security

5
Partnership approach
  • The Abilene Network is a UCAID project done in
    partnership with
  • Cisco Systems (routers, switches, and access)
  • Juniper Networks (routers)
  • Nortel Networks (SONET kit)
  • Qwest Communications (SONET and DWDM circuits)
  • Indiana University (network operations center)
  • Internet2 Technology Evaluation Centers (ITECs)
  • North Carolina and Ohio

6
(No Transcript)
7
Abilene May 2002
  • IP-over-SONET backbone (OC-48c, 2.5 Gbps); 53
    direct connections
  • 4 OC-48c connections
  • 1 Gigabit Ethernet trial
  • 23 will connect via at least OC-12c (622 Mbps) by
    1Q02
  • Number of ATM connections decreasing
  • 215 participants: research universities and labs
  • All 50 states, District of Columbia, and Puerto
    Rico
  • 15 regional GigaPoPs support 70% of participants
  • Expanded access
  • 50 sponsored participants
  • New: Smithsonian Institution, Arecibo Radio
    Telescope
  • 23 state education networks (SEGPs)

8
Abilene international connectivity
  • Transoceanic R&E bandwidths growing!
  • GÉANT: 5 Gbps between Europe and New York City
    now
  • Key international exchange points facilitated by
    Internet2 membership and the U.S. scientific
    community
  • STAR TAP/Star Light - Chicago (GigE)
  • AMPATH - Miami (OC-3c → OC-12c)
  • Pacific Wave - Seattle (GigE)
  • MAN LAN - New York City (GigE/10GigE EP soon)
  • CA*net3/4 - Seattle, Chicago, and New York
  • CUDI - via CENIC and Univ. of Texas at El Paso
  • International transit service
  • Collaboration with CA*net3 and STAR TAP

9
Abilene international connectivity model
  • Abilene is a GTRN partner
  • Already peering with GTRN routers in New York
    City and Seattle
  • Peering at major int'l EPs in U.S. encouraged
  • Chicago: Star Light (migration from STAR TAP)
  • Seattle: Pacific Wave
  • Miami: AMPATH
  • New York City: Manhattan Landing (MAN LAN) in
    progress
  • Los Angeles (soon?)
  • Direct BGP peering preferred
  • via Layer-2 EP media or direct connection
  • ATM support generally ends by Sept 2003
  • No new ATM peers

10
Abilene International Peering (as of 09 March 2002)
  • STAR TAP/Star Light: APAN/TransPAC, CA*net3, CERN,
    CERnet, FASTnet, GEMnet, IUCC, KOREN/KREONET2,
    NORDUnet, RNP2, SURFnet, SingAREN, TAnet2
  • Pacific Wave: AARNET, APAN/TransPAC, CA*net3,
    TANET2
  • NYCM: BELNET, CA*net3, GÉANT, HEANET, JANET,
    NORDUnet
  • SNVA: GEMNET, SINET, SingAREN, WIDE
  • LOSA: UNINET
  • AMPATH (OC-3 to OC-12): REUNA, RNP2, RETINA, ANSP,
    (CRNet)
  • San Diego (CALREN2): CUDI
  • El Paso (UACJ-UT El Paso): CUDI
  • Via GÉANT: ARNES, CARNET, CESnet, DFN, GRNET,
    RENATER, RESTENA, SWITCH, HUNGARNET, GARR-B,
    POL-34, RCST, RedIRIS
11
Packetized raw High Definition Television (HDTV)
  • Raw HDTV/IP: single UDP flow of 1.5 Gbps
  • Project of USC/ISI East, Tektronix, U. of Wash.
    (DARPA)
  • 6 Jan 2002: Seattle to Washington DC via Abilene
  • Single flow utilized 60% of backbone bandwidth
  • 18 hours: no packets lost, 15 resequencing
    episodes
  • End-to-end network performance (includes P/NW and
    MAX GigaPoPs)
  • Loss <0.8 ppb (90% c.l.)
  • Reordering 5 ppb
  • Transcontinental 1-Gbps TCP
  • requires loss of
  • <30 ppb (1.5 KB frames)
  • <1 ppm (9 KB jumbo)

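These loss bounds follow from the standard Mathis et al. steady-state TCP model, rate ≈ (MSS/RTT) · C/√p. The sketch below inverts the model to recover the maximum tolerable loss; the 70 ms transcontinental RTT and C = 0.7 constant are assumptions for illustration, not figures from the slide:

```python
def max_loss_for_rate(rate_bps, mss_bytes, rtt_s, c=0.7):
    """Invert the Mathis model rate = (MSS*8/RTT) * c / sqrt(p)
    to get the highest loss probability p that sustains rate_bps."""
    return ((mss_bytes * 8 * c) / (rtt_s * rate_bps)) ** 2

rtt = 0.070           # assumed transcontinental RTT (seconds)
rate = 1e9            # 1 Gbps target
for mss in (1460, 8960):   # standard vs. ~9 KB jumbo payload
    p = max_loss_for_rate(rate, mss, rtt)
    print(f"MSS {mss} B: loss must stay below {p * 1e9:.0f} per billion")
```

Under these assumptions the bounds come out around 14 per billion for standard frames and roughly 0.5 per million for jumbos, the same order of magnitude as the slide's <30 ppb and <1 ppm figures.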
12
End-to-End Performance: high bandwidth is not
enough
  • Bulk TCP flows (> 10 Mbytes transferred)
  • Current median flow rate over Abilene: 1.9 Mbps

13
True End-to-End Performance requires a system
approach
  • User perception
  • Application
  • Operating system
  • Host IP stack
  • Host network card
  • Local Area Network
  • Campus backbone network
  • Campus link to regional network/GigaPoP
  • GigaPoP link to Internet2 national backbones
  • International connections

(Diagram: eyeball, application, stack, jack, network, ...)
14
Jumbo frames supported
  • Default Abilene backbone MTU has been increased
    from 4.5 to 9 kB
  • We now can support 9 kB MTUs on a per-connector
    basis
  • Motivation: support for HPC and large TCP flows

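A quick way to see why jumbo frames help: a larger MTU raises payload efficiency slightly, but more importantly it cuts the per-packet rate hosts and routers must sustain at a given bandwidth. A sketch (the 40-byte header figure assumes IPv4 + TCP without options):

```python
def efficiency(mtu, overhead=40):
    """Fraction of each packet that is payload (IPv4 + TCP headers assumed)."""
    return (mtu - overhead) / mtu

def packets_per_sec(rate_bps, mtu):
    """Packets per second needed to fill rate_bps at a given MTU."""
    return rate_bps / (mtu * 8)

for mtu in (1500, 4470, 9000):   # standard Ethernet, old Abilene default, jumbo
    print(f"MTU {mtu}: {efficiency(mtu):.1%} payload, "
          f"{packets_per_sec(1e9, mtu):,.0f} pkt/s at 1 Gbps")
```

At 1 Gbps, moving from 1500 B to 9 kB frames cuts the packet rate by a factor of six, which is where most of the end-host benefit comes from.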
15
Abilene traffic characterization information
  • Weekly detailed reports
  • http://netflow.internet2.edu/weekly/
  • General analysis
  • http://www.itec.oar.net/abilene-netflow/

16
Optical networking technology drivers
  • Aggressive period of fiber construction at the
    national and metro scales in the U.S.
  • Now rapid industry contraction and capital crisis
  • Many university campuses and regional GigaPoPs
    already use dark fiber
  • Dense Wave Division Multiplexing (DWDM)
  • Allows the provisioning of multiple channels
    (λs) over distinct wavelengths on the same fiber
    pair
  • Fiber pair can carry 160 channels (1.6 Tbps!)
  • Optical transport is the current focus
  • Optical switching is still in the realm of
    experimental networks, but may be nearing
    practical application

17
DWDM technology primer
  • DWDM fundamentally is an analog optical
    technology
  • Combines multiple channels (2-160 in number)
    over the same fiber pair
  • Uses slightly displaced wavelengths (λs) of
    light
  • Generally supports 2.5 or 10 Gbps channels
  • Physical obstacles to long-distance transmission
    of light
  • Attenuation
  • Solved by amplification (OO)
  • Wavelength dispersion
  • Requires periodic signal regeneration, an
    electronic process (OEO)

18
DWDM system components
  • Base fiber pair (+ right-of-way, conduit)
  • Multiplexing/demultiplexing terminals
  • OEO equipment at each end of light path
  • Output SONET or Ethernet (10G/1G) framing
  • Amplifiers
  • All optical (OO)
  • ~100 km spacing
  • Regeneration
  • Electrical (OEO) process is costly (~50% of
    capital)
  • ~500 km spacing (with Long Haul - LH - DWDM)
  • New technologies (ELH/ULH) can extend this
    distance
  • Remote huts, operations and maintenance

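The ~100 km amplifier and ~500 km regeneration spacings translate directly into how much field equipment a long-haul light path needs, and since OEO regeneration dominates cost, route engineering tries to minimize regen sites. A back-of-envelope sketch (the 3000 km route length is an assumed example, not an Abilene figure):

```python
import math

def span_equipment(route_km, amp_spacing_km=100, regen_spacing_km=500):
    """Count in-line amplifier and regeneration sites for one light path.
    Regeneration sites also amplify, so amp sites exclude regen locations."""
    regens = math.ceil(route_km / regen_spacing_km) - 1
    amp_sites = math.ceil(route_km / amp_spacing_km) - 1 - regens
    return amp_sites, regens

amps, regens = span_equipment(3000)   # assumed coast-to-coast segment length
print(f"~3000 km path: {amps} amplifier sites, {regens} OEO regen sites")
```

At these spacings a 3000 km path needs roughly two dozen amplifier huts but only a handful of regen sites, which is why ELH/ULH technology (stretching regen spacing) matters so much for cost.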
19
Telephony's recent past (from an IP perspective
in the U.S.)
20
IP Networking (and telephony) in the not so
distant future
21
National optical networking options
  • 1) Provision incremental wavelengths
  • Obtain 10-Gbps λs as with SONET
  • Exploit smaller incremental cost of additional
    λs
  • 1st λ costs ~10x more than subsequent λs
  • 2) Build a dim fiber facility
  • Partner with a facilities-based provider
  • Acquire 1-2 fiber pairs on a national scale
  • Outsource operation of inter-city transmission
    equipment
  • Needs lower-cost optical transmission equipment
  • The classic buy vs. build decision in
    Information Technology

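The pricing hint above (first λ roughly 10x the cost of subsequent λs) makes the buy-vs-build tradeoff easy to sketch: leasing wavelengths wins at low counts, while owning fiber wins once enough λs are needed to amortize the fixed cost. All cost figures below are hypothetical relative units, purely illustrative:

```python
def lease_cost(n_lambdas, first=10.0, subsequent=1.0):
    """Option 1: incremental wavelengths; first lambda ~10x the rest
    (relative units, hypothetical prices)."""
    return first + subsequent * (n_lambdas - 1) if n_lambdas else 0.0

def build_cost(n_lambdas, fiber=25.0, per_lambda=0.5):
    """Option 2: dim fiber facility; large fixed cost, cheap extra lambdas."""
    return fiber + per_lambda * n_lambdas

n = 1
while lease_cost(n) <= build_cost(n):
    n += 1
print(f"Building wins once you need {n}+ wavelengths (under these assumptions)")
```

The crossover point is extremely sensitive to the assumed fixed cost of fiber and transmission equipment, which is exactly why lower-cost optical gear is listed as a prerequisite for option 2.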
22
Future of Abilene
  • Original UCAID/Qwest agreement amended on October
    1, 2001
  • Extension of MoU for another 5 years until
    October, 2006
  • Originally expired March, 2003
  • Upgrade of Abilene backbone to optical transport
    capability: λs (unprotected)
  • 4x increase in the core backbone bandwidth
  • OC-48c SONET (2.5 Gbps) to 10-Gbps DWDM

23
Key aspects of next generation Abilene backbone -
I
  • Native IPv6
  • Motivations
  • Resolving IPv4 address exhaustion issues
  • Preservation of the original End-to-End
    Architecture model
  • p2p collaboration tools reverse the trend toward
    CO-centrism
  • International collaboration
  • Router and host OS capabilities
  • Run natively - concurrent with IPv4
  • Replicate multicast deployment strategy
  • Close collaboration with Internet2 IPv6 Working
    Group on regional and campus v6 rollout
  • Addressing architecture

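On the address-exhaustion point, the difference in scale is easy to demonstrate with Python's standard ipaddress module (the example prefix is the reserved IPv6 documentation range, not an Abilene allocation):

```python
import ipaddress

v4 = ipaddress.ip_network("0.0.0.0/0")           # the entire IPv4 space
v6_site = ipaddress.ip_network("2001:db8::/48")  # one documentation-prefix site

print(f"All of IPv4:       {v4.num_addresses:.3e} addresses")
print(f"One IPv6 /48 site: {v6_site.num_addresses:.3e} addresses")
print(f"A single IPv6 /48 holds "
      f"{v6_site.num_addresses // v4.num_addresses:,}x the whole IPv4 Internet")
```

A single site-sized IPv6 prefix dwarfs the whole IPv4 address space, which is what makes restoring the end-to-end model (no NAT) plausible.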
24
Key aspects of next generation Abilene backbone -
II
  • Network resiliency
  • Abilene λs will not be ring-protected like SONET
  • Increasing use of videoconferencing/VoIP imposes
    tighter restoration requirements (<100 ms)
  • Options
  • MPLS/TE fast reroute (initially)
  • IP-based IGP fast convergence (preferable)

25
Key aspects of next generation Abilene backbone -
III
  • New differentiated measurement capabilities
  • Significant factor in NGA rack design
  • 4 dedicated servers at each node
  • Additional provisions for future servers
  • Local data collection to capture data at times of
    network instability
  • Enhance active probing
  • Now: latency, jitter, loss, reachability
    (Surveyor)
  • Regular TCP/UDP throughput tests up to 1 Gbps
  • Separate server for E2E performance beacon
  • Enhance passive measurement
  • Now: SNMP (NOC), traffic matrix/type (Netflow)
  • Routing (BGP and IGP)
  • Optical splitter taps on backbone links at select
    location(s)

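A minimal sketch of what an active latency/jitter probe does, in the spirit of the Surveyor-style tests above: send timestamped UDP probes, measure round-trip times, and report latency and jitter. A loopback echo thread stands in for a real measurement peer (an assumption for the sketch, not Abilene's actual tooling):

```python
import socket
import statistics
import threading
import time

def echo_server(sock):
    """Echo UDP datagrams back to the sender until a stop marker arrives."""
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"stop":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtts = []
for seq in range(20):
    t0 = time.perf_counter()
    client.sendto(seq.to_bytes(4, "big"), ("127.0.0.1", port))
    client.recvfrom(64)                       # wait for the echo
    rtts.append((time.perf_counter() - t0) * 1000)   # milliseconds

client.sendto(b"stop", ("127.0.0.1", port))
print(f"min/median RTT: {min(rtts):.3f}/{statistics.median(rtts):.3f} ms, "
      f"jitter (stdev): {statistics.stdev(rtts):.3f} ms")
```

A production probe would add sequence-gap detection for loss, one-way delay via synchronized clocks, and timeouts for unanswered probes; this sketch only shows the basic RTT/jitter loop.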
26
Abilene Observatories
  • Currently a program outline for better support of
    computer science research
  • Influenced by discussions with NRLC members
  • 1) Improved accessible data archive
  • Need coherent database design
  • Unify and correlate 4 separate data types
  • SNMP, active measurement data, routing, Netflow
  • 2) Provision for direct network measurement and
    experimentation
  • Resources reserved for two additional servers
  • Power (DC), rack space (2RU), router uplink ports
    (GigE)
  • Need process for identifying meritorious projects
  • Need rules of engagement (technical policy)

27
(No Transcript)
28
Next generation router selection
  • Extensive router specification and test plan
    developed
  • Team effort: UCAID staff, NOC, NC and Ohio ITECs
  • Discussions with four router vendors
  • Tests focused on next gen advanced services
  • High performance TCP/IP throughput
  • High performance multicast
  • IPv6 functionality and throughput
  • Classification for QoS and measurement
  • 3 router platforms tested; commercial ISPs
    referenced
  • → New Juniper T640 platform selected

29
Abilene cost recovery model
30
Abilene program changes
  • 10-Gbps (OC-192c POS) connections
  • λ backhaul available wherever needed and possible
  • Only required now for 1 of 4 OC-48c connections
  • 3-year connectivity commitment required
  • Gigabit and 10-Gigabit Ethernet
  • Available when connector has dark fiber access
    into Abilene router node
  • Backhaul not available
  • ATM connection peer support
  • TAC recommended ending ATM support by fall 2003
  • Two major ATM-based GigaPoPs have migrated
  • 2 of 3 NGIXes still are ATM-based
  • NGIX-Chicago @ STAR LIGHT is now GigE
  • Urging phased migration for connectors and peers

31
Deployment timing
  • Ongoing: backbone router procurement
  • Detailed deployment planning
  • July: rack assembly (Indiana Univ.)
  • Aug/Sep: new rack deployment at all 11 nodes
  • Fall: first wave of λs commissioned
  • Fall meeting demonstration events
  • iGRID 2002 (Amsterdam) - late Sep.
  • Internet2 Fall Member Meeting (Los Angeles) -
    late Oct.
  • SC2002 (Baltimore) - mid Nov.
  • Remaining λs commissioned in 2003

32
Two leading national initiatives in the U.S.
  • Next Generation Abilene
  • Advanced Internet backbone
  • connects entire campus networks of the research
    universities
  • 10 Gbps nationally
  • TeraGrid
  • Distributed computing (Grid) backplane
  • connects high performance computing (HPC) machine
    rooms
  • Illinois: NCSA, Argonne
  • California: SDSC, Caltech
  • 4x10 Gbps Chicago ↔ Los Angeles
  • Ongoing collaboration between both projects

33
TeraGrid: A National Infrastructure
For more information: www.teragrid.org
34
TeraGrid Architecture: 13.6 TF (Source: C. Catlett, ANL)
(Diagram of the four-site TeraGrid topology; site capacities as labeled:)
  • Argonne: 64 nodes, 1 TF, 0.25 TB memory, 25 TB disk
  • Caltech: 32 nodes, 0.5 TF, 0.4 TB memory, 86 TB disk
  • NCSA: 500 nodes, 8 TF, 4 TB memory, 240 TB disk
  • SDSC: 256 nodes, 4.1 TF, 2 TB memory, 225 TB disk
  • Interconnect: Juniper M160/M40 routers; OC-12 to
    OC-48 links via ESnet, HSCC, MREN/Abilene, Starlight,
    vBNS, CalREN, NTON; Myrinet Clos spines within sites
35
Optical networking scaling factors
  • 2 TeraGrid routing nodes
  • 11 Next Generation Abilene routers
  • 53 Abilene connectors
  • 215 Abilene participants (universities and labs)
  • But…
  • 30-60 DWDM access nodes in leading viable
    carriers' U.S. networks

36
Regional optical fanout
  • Next generation architecture: regional and
    state-based optical networking projects are
    critical
  • Three-level hierarchy
  • backbone, GigaPoPs/ARNs, campuses
  • Leading examples
  • CENIC ONI (California), I-WIRE (Illinois),
    Indiana (I-LIGHT)
  • Collaboration with the Quilt GigaPoPs
  • Regional Optical Networking project
  • U.S. carrier DWDM access is not nearly as
    widespread now as SONET was circa 1998
  • 30-60 cities for DWDM
  • 120 cities for SONET

37
Optical network project differentiation
38
Conclusions
  • Backbone upgrade project underway
  • Partnership with Qwest extended through 2006
  • Juniper T640 routers selected for backbone
  • 10-Gbps backbone λ deployment starts this fall
  • Incremental, non-disruptive transition
  • Advanced service foci
  • Native, high-performance IPv6
  • Enhanced, differentiated measurement
  • Network resiliency
  • NSF TeraGrid and Extended Terascale Facility
  • Complementary and collaborative relationship
  • Continue to examine prospects for a fiber optical
    networking facility (National Light Rail)

39
For more information
  • Web: www.internet2.edu/abilene
  • E-mail: abilene@internet2.edu

40
www.internet2.edu