OptIPuter Infrastructure: How to Recognize, Procure, and Control

1
OptIPuter Infrastructure: How to Recognize,
Procure, and Control
Tom DeFanti, Phil Papadopoulos, Joe Mambretti
2
A Gold Standard OptIPuter Application Cluster
  • Is a cluster with appropriate display, storage,
    and network interfaces for the application
  • Master node connected to a fast routed campus
    network connected to Internet2 (slave nodes may
    also connect to routed external R&E networks)
  • Has Linux installed with Rocks
  • All nodes are connected to each other via a
    private internal cluster network
  • All nodes are connected by VLANs to the
    campus/metro/national/international LambdaGrid
  • Lambdas are provisioned and controlled by
    OptIPuter middleware

3
UCSD Campus-Scale Routed OptIPuter with Nodes for
Storage, Computation and Visualization
4
UCSD Campus-Scale Routed OptIPuter with Nodes for
Storage, Computation and Visualization
5
OMNInet Metro-Scale 10Gb Single-Domain Lambda
Switched OptIPuter, Chicago
6
OptIPuter National Testbed 10GE CAVEwave Rides
the National LambdaRail
EVL
Next Step: Coupling NASA Centers to the NSF
OptIPuter Testbed
Source: Tom DeFanti, OptIPuter co-PI
7
IRNC and Related Circuits in 2005: North America,
Europe and Japan
  • US IRNC (black): 20Gb NYC-Amsterdam, 10Gb LA-Tokyo
  • GEANT/I2 (orange): 30Gb London etc.-NYC
  • UK to US (red): 10Gb London-Chicago
  • SURFnet to US (light blue): 10Gb Amsterdam-NYC,
    10Gb Amsterdam-Chicago
  • Canadian CAnet4 to US (white): 30Gb Chicago-Canada-NYC,
    30Gb Chicago-Canada-Seattle
  • Japan JGN II to US (grey): 10Gb Chicago-Tokyo
  • European (not GEANT) (yellow): 10Gb Amsterdam-CERN,
    10Gb Prague-Amsterdam, 2.5Gb Stockholm-Amsterdam,
    10Gb London-Amsterdam
  • IEEAF lambdas (dark blue): 10Gb NYC-Amsterdam,
    10Gb Seattle-Tokyo
  • CAVEwave/PacificWave (purple): 10Gb Chicago-Seattle-SD,
    10Gb Seattle-LA-SD
(Map labels: Northern Light, UKLight, Japan, CERN, PNWGP, Manhattan Landing)
8
Global-Scale Switched OptIPuter: Multi-Domain
(Diagram: clusters at the University of Amsterdam and the
University of Illinois at Chicago, each on an all-optical LAN,
linked between NetherLight (Amsterdam) and StarLight (Chicago)
over OC-192; a third cluster sits on the OMNInet all-optical MAN
spanning Chicago and Northwestern at Evanston; control elements
include PDC, BOD/AAA, PIN, and an ODIN/GMPLS signaling link.)
9
(No Transcript)
10
OptIPuter Optical Architecture Tasks
  • Enhanced Distributed Optical Backplane
    Architecture
  • Optical BackPlane Control and Management for
    Dynamic Lightpath Switching
  • Optical Signaling Technologies
  • Closer Integration Among Components
  • Lightpath Management Methods
  • Survivability, Reliability, Restoration
  • Support New Interfaces to Clusters, Storage
    Devices
  • AAA Policy-Driven Access
  • Preliminary Lambda Scheduler (Resource
    Reservation Manager)
  • Performance Metrics, Analysis and Protocol
    Parameters

11
CAVEWave Performance Metricians
  • Linda, Phil, Aaron, George, Pat, Alan
    (NSF/DOE/NASA)
  • Luc's iperf results:
  • UDP: using multiple streams, unable to go over
    900Mbps total on the 10GE CAVEWave
  • TCP: using multiple streams, unable to go over
    1.2Gbps
  • None of this is easy
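Multi-stream iperf runs like those above can be reproduced with commands along these lines (a sketch only, assuming the classic iperf2 tool is installed; the receiver address 10.0.0.2, the stream count, and the per-stream rates are hypothetical examples, not the parameters Luc used):

```shell
# Receiving node: start an iperf server (iperf2 syntax)
iperf -s &

# Sending node: UDP with 4 parallel streams (-P) at 250 Mbit/s
# each (-b), ~1 Gbit/s aggregate; 10.0.0.2 is a hypothetical
# receiver address
iperf -c 10.0.0.2 -u -P 4 -b 250M -t 30

# Sending node: TCP with 4 parallel streams and a 4 MB socket
# buffer (-w), which long fat paths like CAVEWave need to keep
# the pipe full
iperf -c 10.0.0.2 -P 4 -w 4M -t 30
```

Aggregate throughput is the sum of the per-stream reports; low UDP totals with high loss, as seen on CAVEWave, usually point at an undersized buffer or a rate-limited hop rather than the end hosts.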

12
UCSD Campus-Scale Switched Lambda Testbed Future
  • Goals by 2007:
  • > 50 endpoints at 10 GigE
  • > 32 packet-switched
  • > 32 switched wavelengths
  • > 300 connected endpoints

13
100 MegaPixel Display
  • 55-panel display (100 megapixels; 11x5 array of
    21-inch LCD screens)
  • 30 dual Opterons (64-bit)
  • 60 TB disk
  • 30 10GE + 30 GE interfaces
  • MEMS optical-switched 10GEs and routed GEs
  • Optical DVI cabling
  • Calit2 will have the Gold Standard Application
    Cluster for:
  • Testing from afar
  • Demonstrations locally
  • Diagnostics and configuration archiving

EVL
14
UvA OptIPuter MEMS-Switched Clusters
WS = Web Services; AAA = IETF-standard Authentication,
Authorization, Accounting
CdL
15
UvA StarPlane
(Diagram: five CPU clusters, each with a router (R), at
UvA-VLE, UvA-MM, VU, ULeiden, and TUDelft, interconnected
through a NOC.)
CdL
16
Summary: How to OptIPuterize an
Application-Based Cluster
  • Procure a suitable cluster with appropriate
    display/storage/NICs for your application
  • Install it, connect it to your campus network,
    and install Linux with Rocks
  • Create a LAN capable of VLAN assignments (you
    need a network switch with enough ports)
  • Hook your LAN via VLANs to the LambdaGrid:
  • Via a US GLIF exchange wave, e.g.:
  • CAVEWave: UIC/UCSD/UW
  • PacificWave: UW/UCSD
  • AtlanticWave: LSU/NYC
  • HOPI/NLR
  • Or at an international GLIF exchange point, e.g.:
  • NetherLight, T-LEX, HK-Light, MoscowLight,
    UK-Light, NorthernLight, MAN LAN, PNWGP,
    StarLight, more coming
  • Control it with OptIPuter middleware
  • Install demonstration/diagnostic software and
    test/compare your cluster performance with the
    Gold Standard OptIPuter nodes at UIC, UCSD, UvA
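The VLAN step above can be sketched on a Linux cluster node with iproute2 (an illustrative fragment; the interface name eth0, VLAN ID 100, and addresses are hypothetical — the tag and subnet must match what your campus network operators assign for the LambdaGrid uplink):

```shell
# Create a tagged VLAN subinterface on eth0 (name and ID 100
# are hypothetical examples)
ip link add link eth0 name eth0.100 type vlan id 100

# Address it on the LambdaGrid-facing subnet (example values)
# and bring it up
ip addr add 10.100.0.5/24 dev eth0.100
ip link set dev eth0.100 up
```

The same tag must also be trunked on the cluster's switch port; the node-side commands alone do nothing until the switch carries the VLAN through to the exchange wave.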

17
The Gold Standard OptIPuter Testbed
  • UCSD/Calit2 and SDSC
  • StarLight
  • UIC/EVL
  • UvA/SARA
  • People
  • Clusters
  • Lambda Connectivity
  • Middleware
  • Measurement/Monitoring
  • Displays

18
OptIPuter Gold Standard Testbed Issues:
Demonstrate by iGrid2005, Answers by the Next AHM
  • Monitoring and Administration
  • Measurement unification
  • Addressing schemes
  • Security schemes
  • Topology discovery and allocation
  • Interaction with GLIF partner NOCs
  • DV/HDTV collaboration technology
  • Long-term staffing for OptIPuter support

19
OptIPuterize!
  • Propose an application or network experiment for
    iGrid2005
  • Blurbs due Monday to maxine@uic.edu
  • Test your application on the OptIPuter clusters
    at Calit2, EVL, StarLight, UvA/SARA