Internet2: A Tutorial Part 3 of 4 - PowerPoint PPT Presentation
Slides: 61
Provided by: epaul
Learn more at: http://www.internet2.edu
Transcript and Presenter's Notes



1
Internet2: A Tutorial, Part 3 of 4
  • 17th Brazilian Symposium on Computer Networks
  • Paul Love, Internet2
  • Chair, I2 Topology WG
  • epl@internet2.edu

2
Working Groups

3
Working Groups
  • IPv6
  • Measurement
  • Multicast
  • Network Management
  • Network Storage
  • Quality of Service
  • Routing
  • Security
  • Topology

4
IPv6
  • Chair: Dale Finkelson, Univ Nebraska, Lincoln
  • Focus
  • Explore the rôle that IPv6 will play in the
    Internet2 project
  • Work with those interested in IPv6 to build IPv6
    testbeds across the Internet2 structure,
    including vBNS and Abilene
  • Must be coordinated across backbones, gigaPoPs,
    and campuses
  • Must be interoperable among above and between
    vendors

5
Measurement
  • Chairs: David Wasley, Univ California and Matt
    Zekauskas, Internet2 staff
  • Focus
  • Places to measure
  • At campuses, at gigaPoPs, within interconnect(s)
  • Things to measure
  • Traffic utilization
  • Performance: delay and packet loss
  • Traffic characterization
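The delay and loss metrics above can be sketched in a few lines. This is a hypothetical illustration, not Internet2 measurement tooling: the probe timestamps and the helper function are invented for the example.

```python
# Hypothetical sketch: reducing probe send/receive timestamps to the
# metrics the Measurement WG lists (packet loss, delay).

def summarize_probes(sent, received):
    """sent: {seq: t_sent}, received: {seq: t_recv}, times in seconds."""
    delays = sorted(received[s] - sent[s] for s in received if s in sent)
    loss = 1.0 - len(delays) / len(sent)
    median = delays[len(delays) // 2] if delays else None
    return {"loss_rate": loss, "median_delay": median}

sent = {i: i * 0.1 for i in range(10)}                        # one probe per 100 ms
received = {i: i * 0.1 + 0.030 for i in range(10) if i != 3}  # seq 3 was lost
stats = summarize_probes(sent, received)
print(stats)  # loss_rate ≈ 0.1, median_delay ≈ 0.030
```

Traffic characterization would aggregate the same raw probe data differently (per-flow, per-application), but the collection points are identical.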

6
Multicast
  • Chair: Kevin Almeroth, Univ California at Santa
    Barbara
  • Focus: Make native IP multicast scalable and
    operationally effective
  • Must be coordinated across backbones, gigaPoPs,
    and campuses
  • Must be coordinated with unicast routing

7
Network Management
  • Chair: Mark Johnson, North Carolina Networking
    Initiative
  • Focus
  • Common trouble ticket system
  • How can all our interconnects and gigaPoPs and
    universities appear to be a seamless whole?

8
Network Storage
  • Chair: Micah Beck, Univ of Tennessee, Knoxville
  • Focus: Develop and deploy a reliable, scalable,
    high performance network storage capability
    enabling broad access to stored video, very large
    data sets, etc.

9
Quality of Service
  • Chair: Ben Teitelbaum, Internet2 staff
  • Focus: Multi-network IP-based QoS
  • Relevant to advanced applications
  • Interoperability: carriers and kit
  • Architecture
  • Qbone distributed testbed

10
The QoS Big Problems
  • Understanding Application Requirements
  • Scalability
  • Interoperability

11
Routing
  • Chair: Steve Corbato, Univ Washington
  • Focus: Internal and external routing
  • Critical issues
  • gigaPoP internal routing design
  • Explicit routing requirement (the fish problem)
  • gigaPoP external routing recommendations
  • Subscribers (Internet2 campuses)
  • National interconnects (vBNS, Abilene, and NGI
    networks)

12
Nature of Explicit Routing
  • Fish problem
  • C1 routes via NSP1 and C2 routes via NSP2
  • One potential solution - MPLS

13
Security
  • Chair: Peter Berger, Carnegie Mellon Univ
  • Focus
  • Authentication
  • Application to QoS
  • Application to Digital Libraries

14
Topology
  • Chair: Paul Love, Internet2 staff
  • Focus: Topology of Internet2
  • Internal Internet2 connections
  • Between I2 backbones
  • Internet2 with other Advanced Research Networks
  • NGI
  • International R&E

15
Working Group Summary
  • Internet2's WGs focused on the project's needs
  • Complement IETF WGs
  • Membership by invitation of the chair

16
IPv6

17
Internet2 Abilene IPv6 Networking
  • with thanks to
  • Dale Finkelson, Univ of Nebraska, Lincoln

18
Project Goals
  • Deploying an IPv6 testbed
  • Both in the vBNS and Abilene
  • Understanding what IPv6 can contribute to the
    research agenda of the Internet2 project.

19
Abilene IPv6 Description
  • IP-over-SONET backbone
  • This effectively blocks deploying IPv6 in native
    mode within the backbone until
  • Code becomes available for the Cisco 12000
  • It is stable
  • It doesn't block multicast and QoS
  • IPv6 will be tunneled through Abilene
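The tunneling mentioned above is IPv6-in-IPv4 encapsulation (protocol 41, often called 6in4): the v6 packet rides as the payload of an ordinary v4 datagram across the backbone. A minimal sketch of that framing, with made-up addresses and the checksum omitted:

```python
# Sketch of 6in4 encapsulation: prepend an IPv4 header whose protocol
# field is 41 (IPv6). Addresses are illustrative; checksum left as 0.
import socket
import struct

PROTO_IPV6 = 41  # IANA protocol number for IPv6-in-IPv4

def encapsulate_6in4(ipv6_packet: bytes, src_v4: str, dst_v4: str) -> bytes:
    """Wrap an IPv6 packet in a minimal 20-byte IPv4 header."""
    total_len = 20 + len(ipv6_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45,                 # version 4, IHL 5 (no options)
        0,                    # TOS
        total_len,
        0, 0,                 # identification, flags/fragment offset
        64,                   # TTL
        PROTO_IPV6,           # next-protocol: IPv6
        0,                    # header checksum (not computed in this sketch)
        socket.inet_aton(src_v4),
        socket.inet_aton(dst_v4),
    )
    return header + ipv6_packet

# A 40-byte IPv6 header (version nibble 6) as a stand-in payload:
frame = encapsulate_6in4(b"\x60" + b"\x00" * 39, "192.0.2.1", "192.0.2.2")
```

The tunnel endpoint at the far side simply strips the 20-byte v4 header and forwards the inner v6 packet natively.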

20
Equipment and Protocols
  • The initial deployment will be with routers
    donated by Bay Networks
  • Routing will be done with BGP4
  • Some gigapops will implement tunnel servers for
    local connectivity
  • Gigapops with ATM connectivity will be open to
    native IPv6 connections, others will use tunnels
  • Details still TBD

21
Peering arrangements
  • The IPv6 version of Abilene will peer with the
    vBNS at two or more points
  • MREN (Chicago switch)
  • NCNE (Pittsburgh gigapop)
  • AbileneV6 will peer with other providers at the
    6TAP (Chicago switch)
  • ESnet
  • CA*net3
  • European networks
  • AbileneV6 will be available at both of the NGIXs
    (3rd still TBD)

22
Schedule
  • vBNS network was in place by the end of June '98
  • Backbone deployment of IPv6 routers in Abilene in
    the summer of 1999
  • By the end of summer
  • Initial connectivity to gigapops
  • Connectivity to other IPv6 networks

23
Working Group Agenda
  • Preparing Good Practices document for gigapop
    operators.
  • Addressing options
  • Configuration samples
  • Working with the Abilene engineering staff to
    implement the IPv6 network
  • Design an addressing plan for Abilene

24
Gigapop Issues
  • Obtaining Addresses
  • Multi-homing Hosts
  • This is specifically a problem for multihomed
    gigapops
  • Providing DNS services for IPv6
  • Providing either Native IPv6 or tunnels to the
    backbones
  • Providing IPv6 connectivity to their customers

25
Addressing Questions
  • Who gets pTLAs?
  • Abilene, vBNS, gigapops?
  • How do campus addresses relate to the TLAs?
  • Can you do multiple addresses within a v6 host?
  • For multiply attached gigapops
  • Do you draw NLAs from each provider?
  • Do you do private addressing at the campus?
  • Some sort of translation at the edge?
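The TLA/NLA questions above come from the aggregatable global unicast format of RFC 2374, which carves a v6 address into provider-hierarchy fields. A sketch of that layout, using a made-up 6bone-style test address:

```python
# Illustrative split of an IPv6 address into RFC 2374 fields:
# FP(3) | TLA(13) | reserved(8) | NLA(24) | SLA(16) | interface ID(64).
# The example address is invented.
import ipaddress

def split_aggregatable(addr: str) -> dict:
    """Return the aggregatable-format fields of a v6 address as integers."""
    bits = int(ipaddress.IPv6Address(addr))
    return {
        "fp":  (bits >> 125) & 0x7,            # format prefix (001)
        "tla": (bits >> 112) & 0x1FFF,         # top-level aggregator
        "res": (bits >> 104) & 0xFF,           # reserved
        "nla": (bits >> 80) & 0xFFFFFF,        # next-level aggregator
        "sla": (bits >> 64) & 0xFFFF,          # site-level aggregator
        "iid": bits & 0xFFFFFFFFFFFFFFFF,      # interface identifier
    }

fields = split_aggregatable("3ffe:0b00:0000:0001::1")
```

In these terms, the slide's questions become: who is delegated a TLA (or pseudo-TLA), whether a multiply attached gigapop draws one NLA block per upstream, and how campuses number their SLAs under each.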

26
Possible Abilene IPv6 Backbone Peering Points
[Map: candidate v6 peering points on the Abilene backbone — Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Chicago (STAR TAP NGIX), Cleveland, Atlanta, New York, with NGIX locations marked. N.B.: v6 will be carried in v4 tunnels inside Abilene.]
27
Pointers
  • General Information sites
  • www.6ren.net
  • www.ipv6.org
  • www.6bone.net
  • Site for implementations
  • All of the above sites have links to sites where
    implementation information can be found

28
Pointers
  • IETF Documentation
  • www.6bone.net has a link to IETF information
  • draft-iab-nat-implications-04.txt
  • draft-carpenter-transparency-01.txt
  • The Case for IPv6
  • draft-ietf-iab-case-for-ipv6-04.txt

29
Network Storage

30
Internet2 Distributed Storage Infrastructure
Update
  • with thanks to
  • Micah Beck Univ. of Tennessee, Knoxville
  • Bert Dempsey Univ. of North Carolina, Chapel
    Hill
  • http://dsi.internet2.edu

31
I2-DSI Participants
  • UT Knoxville
  • Micah Beck
  • Terry Moore
  • Martin Swany
  • Judi Talley
  • UNC Chapel Hill
  • Bert Dempsey
  • Paul Jones (MetaLab)
  • Debra Weiss
  • Zhiwei Xiao
  • GigaPOP and Campus Site Managers
  • UCAID/Internet2
  • Network Storage Working Group
  • Ted Hanss, Applications Director
  • NC Networking Initiative
  • Digital Library Federation

32
A Word From the Sponsors
  • Cisco: DNS redirection
  • Ellemtel: engineering effort
  • IBM: large storage, DCE servers
  • Novell: storage, directory servers
  • Starburst: reliable multicast software
  • StorageTek: large storage servers
  • Sun: design collaboration

33
Single Server Model
  • High performance locally
  • Unacceptable performance across commodity backbone

34
Relying on Wide Area QoS
  • High performance access with reserved bandwidth
  • Essential for real-time communication
  • Technically difficult, expensive, not generally
    available

35
I2-DSI Model: Replicated Services
  • Clients access nearby server
  • Everyone gets performance
  • Local resources implement a global service
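The resolution step in this model (the Sonar/Distributed Director role on the next slide) amounts to steering each client to the replica that is closest in network terms. A minimal sketch under invented server names and RTTs:

```python
# Hypothetical sketch of replica selection: send the client to the
# server with the lowest measured round-trip time. All names and
# numbers here are made up for illustration.

def nearest_replica(rtt_ms: dict) -> str:
    """Pick the replica with the smallest measured RTT."""
    return min(rtt_ms, key=rtt_ms.get)

measured = {
    "dsi-east.example.edu": 18.0,
    "dsi-west.example.edu": 74.0,
    "dsi-central.example.edu": 41.0,
}
best = nearest_replica(measured)
print(best)  # dsi-east.example.edu
```

A production resolver would fold in server load and freshness of the measurements, but the principle is the same: local resources answer for a global service.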

36
I2-DSI Service Architecture
  • Replication
  • rsync, Omnicast, AFS/DFS, Novell Replication
  • Resolution
  • Sonar, DNS, Distributed Director
  • Delegation
  • Cache prefetch

37
Internet Content Channels
  • A channel is a collection of content which can be
    transparently delivered to end user communities
    at a chosen (price, performance) point through a
    flexible, policy-based application of resources

38
Server Channel Examples
  • Replicated Web Servers
  • APIs: Standard HTML, Active Server Pages
  • Channels: Web sites
  • Streaming Media
  • APIs: MPEG-2, proprietary file formats
  • Channels: collections of multimedia presentations
  • Executable content
  • APIs: Java byte code, Tcl, Perl
  • Channels: CGI programs

39
Current Server Deployment
40
IBM Web Cache Manager
  • RS/6000 AIX Server
  • 1 GB RAM, 72 GB Disk / 900 GB Tape
  • ADSM Hierarchical Storage Mgt.
41
Infrastructure Expansion
  • StorageTek
  • 2 PC/Linux Servers
  • 700GB disk, tape backup
  • Novell
  • 6 PC/NetWare Servers
  • 100GB disk
  • Smaller institutions or departments

42
I2-DSI Applications Workshop, Chapel Hill, NC
March 4-5, 1999
  • 4 technologies
  • Minnesota: Scalable Video
  • IBM Research: Multicast, Filter and Store
  • Moscow Ctr. for New Info. Tech. in Med. Ed.:
    Semantic Text Analysis
  • IBM Research: Narwhal Resolution Proxy
  • http://dsi.internet2.edu/apps99.html
  • Special issue of the Journal of Network and
    Computer Applications (Academic Press)

43
Application Strategy
  • Choose initial applications
  • Available or easily ported services
  • Low update demands
  • Port to an I2-DSI server
  • Our development effort is limited
  • App developers can have access to the servers
  • Distribute to homogeneous core
  • Derive service abstractions

44
I2-DSI Applications
  • Digital libraries
  • Video
  • Digitized originals
  • Large data sets
  • Medical imaging
  • CERN instruments
  • Satellite images / GIS
  • Technical Archives
  • Netlib/NHSR: Scientific software
  • Red Hat Linux: Source code
  • Viagenie: Net. Eng. Documents

45
Replication Performance and Scalability Issues
  • Server placement
  • Server resources
  • Server description (metadata)
  • Server Channel description (metadata)
  • Object representation
  • Characterization of replication mechanisms
  • Channel-to-server mapping (subscription)

46
NetStore 99 Workshop
  • Network Storage Technical Workshop
  • Knoxville, TN, October 1999
  • http://dsi.internet2.edu/netstore99
  • Scope
  • I2-DSI implementation
  • I2-DSI applications
  • Related networking projects
  • Storage technology

47
Conclusions
  • A server platform is in place
  • Infrastructure development
  • Service abstractions (search, computation)
  • Publication and replication protocols
  • Portable representation and API
  • Heterogeneous servers
  • Six months to show results from initial
    application development efforts

48
Multicast

49
Multicast Update
  • with thanks to
  • Kevin Almeroth, Univ of California, Santa Barbara
  • http://www.internet2.edu/multicast/

50
1999: A key year for multicast
  • In the past, multicast has meant MBone
  • Core set of committed users and engineers
  • Legacy non-scalable approaches to routing
  • Our hope for 1999
  • Needed, new protocols deployed
  • Enable scalable use of high-speed multicast flows
    throughout the Internet2 structure

51
Inter-Domain Guidelines
  • All backbones will use MBGP/MSDP/PIM-SM
  • MBGP: exchange multicast routing information.
  • MSDP: connect SM clouds (source advertising).
  • PIM-SM: shared-tree routing protocol.
  • Join/graft: only deliver traffic on links with
    active sources.
  • Backbones actively discussing/deploying multicast
    peering
  • Abilene, vBNS, NREN, DREN, CA*net2/3, ESnet,
    NORDUnet, and SURFnet.

52
Latest Status
  • Abilene tested multicast code; a stable code
    version was found.
  • NREN has successfully deployed PIM-SM.
  • MSDP peering with vBNS and MIX at NASA-Ames.
  • Recently switched to PIM-SM (for load reasons).
  • vBNS has also had success
  • Switched to PIM-SM recently.
  • MSDP peering with NREN, Merit (others soon).
  • MBGP w/ 8 groups; translation w/ 20 others.

53
Moving in the Right Direction
  • Doing native multicast is the right way to move
    forward.
  • We are approaching the problem top down.
  • Need to continue this effort into the Gigapops
    and to member institutions.
  • Economies-of-scale, in terms of manual
    intervention, are significant.
  • What does all this mean...

54
Requirements for Multicast
  • Raise the bar for Internet2.
  • No tunnels: fully deploy native multicast.
  • Peering must be done with MBGP (and MSDP).
  • Institutions who BGP peer should also MBGP peer.
  • Caveats
  • If no BGP peering (default), then the same for
    multicast.
  • If congruent unicast/multicast topology, an MBGP
    translate service may be available.
  • Not a complete prohibition of tunnels, but
  • Be careful about protecting low-capacity
    interfaces.
  • Don't create routing loops.

55
The Challenges
  • Where things break
  • Multicast in multi-homed environments when
  • Switch-over of unicast to I2 but multicast is
    still via some other network AND connection is
    via PIM.
  • RPF failures.
  • Things are better when
  • Multicast is a true I2 service and
    unicast/multicast topologies are congruent or,
  • Network uses MBGP.
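The RPF failures above follow from the reverse-path forwarding check at the heart of multicast routing: a router accepts a multicast packet only if it arrived on the interface its routing table would use to reach the source. When unicast moves to I2 but multicast still arrives via another path, the check fails. A toy sketch with invented interfaces and routes:

```python
# Sketch of the RPF check: accept a multicast packet only when it came
# in on the interface the (unicast or MBGP) table points back toward
# the source. Source address, interfaces, and routes are made up.

def rpf_accept(source: str, arrival_if: str, route_table: dict) -> bool:
    """True iff the packet arrived on the RPF interface for its source."""
    expected_if = route_table.get(source)
    return expected_if == arrival_if

routes = {"198.51.100.7": "if-abilene"}  # best path to the source is via I2

congruent = rpf_accept("198.51.100.7", "if-abilene", routes)      # accepted
incongruent = rpf_accept("198.51.100.7", "if-commodity", routes)  # RPF failure
```

MBGP fixes the incongruent case by letting the router carry a separate multicast topology, so the RPF lookup can point at the interface multicast actually uses.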

56
The Challenges, Part II
  • Top two layers are key.
  • Need vendor support for inter-domain multicast
    protocols.
  • Vendor support is coming.
  • Need network operators to be aggressive.
  • Several have set the standard

57
Solution Two Action Items
  • Communicate with upstream service provider about
    how multicast will be delivered.
  • High confidence in backbones.
  • Abilene NOC (and WG) is educating Gigapops (and
    members) about how to handle multicast.
  • Members should be prepared to run MBGP/MSDP.
  • Pressure vendors to deploy these protocols.
  • Many vendors have timetables for releases.
  • Can deploy/co-locate multicast in parallel to
    unicast.

58
Open Issues
  • How does the I2 Multicast Working Group assist in
    deployment of multicast from the backbones all
    the way to member institutions?
  • Use the I2 multicast mailing list (subscribe by
    mailing listserv@internet2.edu; place in the
    body: subscribe wg-multicast)
  • Collect experience and create guidelines.
  • Protecting low-capacity multicast environments
    from high-capacity groups.
  • Replace dense-mode protocols with sparse mode.
  • Set administrative boundaries.

59
Virtual Clinic Network Diagram
[Network diagram: client PC hosts reach the virtual clinic over two paths — a satellite path (GRC uplink, SBS-5, satellite modems, Navajo Nation downlink, NREN Navajo router) and a terrestrial path through NREN/NAS Fast Ethernet switches, ATM switches (GRC, ARC, ESnet, NGIX), and routers (NREN NAS, NREN GRC, NREN Stanford, Chicago vBNS, CALREN 2 UCB vBNS, UCSC, Abilene Berkeley) over DS3 and OC3 links, with an SGI server and w2w PC server behind a 100BaseTX hub.]
With thanks to Mark Foster, NASA Ames
60
The End