Network and Hardware Infrastructure Team Year 4: Toward the Gold Standard OptIPuter and SC06

Transcript and Presenter's Notes
1
Network and Hardware Infrastructure Team Year 4
Toward the Gold Standard OptIPuter and SC06
Campus, Metro, Regional, National, and Global
Deployment
Testbed sites: UvA, UIC/NU, GSFC, UCSD
Tom DeFanti and Phil Papadopoulos, Co-PIs
2
Network and Hardware Infrastructure Team
Testbeds: Campus/Regional
  • UCSD Campus/Regional
  • iGrid 2005 temporarily connected 10x10Gb waves to
    the Calit2 building
  • 2x10Gb and 5x1GE now persist (CAVEwave, TeraWave,
    CineGrid 1GE, 2GE to UCI, 2GE to USC)
  • More than 100 64-bit nodes connected
  • Deploy full bisection networks to
  • NCMIR (20Gb),
  • BIRN (10Gb),
  • SIO (10Gb),
  • Calit2 Visualization displays (20Gb to the wall,
    10Gb to the 4K Prism),
  • SDSC (40Gb),
  • CAMERA (10Gb),
  • plus Campus 10GE shared network
  • Demonstrate >2x10Gb sustained storage read/write
    (see the measurement sketch below)
  • Connection Goals
  • 2GE to UCI
  • 10Gb to USC,
  • 10Gb to UW for SIO/Looking,
  • 1GE to CICESE

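The storage goal above is about measured end-to-end throughput rather than nominal link speed. A minimal sketch of how one might time a sustained sequential write and read on a single node (the file path, block size, and total size are illustrative assumptions, not values from the slides; the actual OptIPuter tests were striped across many cluster nodes):

```python
import os
import time

def measure_sequential_io(path="testfile.bin", block_mb=64, total_gb=1):
    """Time a sustained sequential write, then read, of one large file.

    Note: the read pass may be served from the page cache, so this is
    only a rough single-node sketch, not a cluster-wide benchmark.
    """
    block = os.urandom(block_mb * 1024 * 1024)
    n_blocks = (total_gb * 1024) // block_mb

    start = time.time()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before stopping the clock
    write_gbps = (total_gb * 8) / (time.time() - start)

    start = time.time()
    with open(path, "rb") as f:
        while f.read(block_mb * 1024 * 1024):
            pass
    read_gbps = (total_gb * 8) / (time.time() - start)

    os.remove(path)
    return write_gbps, read_gbps

if __name__ == "__main__":
    w, r = measure_sequential_io()
    print(f"write ~{w:.2f} Gb/s, read ~{r:.2f} Gb/s")
```

Sustaining more than 2x10Gb requires aggregating many such streams across cluster nodes and the parallel storage behind them.
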
3
UCSD Quartzite Dense Wave Division Multiplexing
Phil: "With the Myri10G cards, when we get the
DWDM transceivers, we can drop a lambda directly
into a node."
4
UCSD Calit2 building OptIPuter Visualization
5
UCI's New Capability for the OptIPuter
  • Why a GPS clock rather than NTP?
  • Time precision: 250 nanoseconds (see the
    offset-check sketch below)
  • How much does it cost?
  • $800 for the board, plus cost of antenna, coax
    cable, and mounting
  • Meinberg GPS Antenna/ Converter Unit
  • Standard coaxial cable can be used for connection
    with the GPS clock
  • A distance of up to 600 meters between receiver
    and antenna is possible without an additional
    amplifier
  • Remotely powered by the connected GPS receiver

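For context on the 250-nanosecond figure: a plain NTP client over the wide area typically sees offsets on the order of milliseconds, which is the gap a GPS-disciplined clock closes. A minimal sketch, assuming the third-party Python package ntplib (not mentioned in the slides), that reports the local clock offset against a public NTP server:

```python
import ntplib  # third-party package: pip install ntplib

def ntp_offset_ms(server="pool.ntp.org"):
    """Estimate the local clock offset (in ms) against an NTP server.

    Over a WAN this is typically milliseconds, i.e. orders of magnitude
    coarser than the ~250 ns precision of a GPS-disciplined clock such
    as the Meinberg unit described above.
    """
    client = ntplib.NTPClient()
    response = client.request(server, version=3)
    return response.offset * 1000.0

if __name__ == "__main__":
    print(f"NTP offset vs pool.ntp.org: {ntp_offset_ms():.3f} ms")
```
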
6
Network and Hardware Infrastructure Team
Testbeds: Metro/Regional
  • Metro Chicago/Midwest Regional
  • UIC 2x10GE and 8x1GE to StarLight
  • NU OMNInet Phase 2 testbed 24x10GE (over 4
    nodes)
  • U. Michigan 10Gb
  • UIUC NCSA 10Gb
  • ONR 10Gb TRECC

7
NU/UIC Metro Chicago OMNInet Phase 2
[Network diagram: four OMNInet Phase 2 sites (600
N. Federal, 1890 W. Taylor, 710 North Lake Shore
Drive, 750 North Lake Shore Drive), each with an
optical switch and a high-performance L2 switch;
sites linked by 1x10G WAN connections, with 2xGE
drops at each L2 switch.]
8
UIC/UCSD/UW/UM Teleconferencing Soup
  • DV (SD resolution)
  • 10Mb; easy; 1 sec round trip (RT)
  • HDV (< HD)
  • 10Mb; now deployed; very latent, > 2 sec RT
  • DVCpro HD
  • 100Mb; needs development; potentially low latency
  • HD currently deployed by UW Research Channel, U
    Michigan
  • Latency: 160 to 400 ms predicted RT
  • 1.5Gb; most expensive; can be cropped/compressed
    to under 1Gb
  • GigE camera (HD quality)
  • 700Mb; needs development; < 400ms RT (rough
    measurement)
  • SD STEREO (2 x SD); spectacular low latency, 1/3
    sec RT
  • 220Mb; deployed and tested
  • Plus speed of light (100-250ms at global scale;
    see the propagation-delay sketch below)
  • Need <250ms latency for collaboration... oops!
  • Advice: wait a bit

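The 100-250ms global-scale figure is dominated by propagation delay in fiber. A minimal sketch of that arithmetic; the refractive index and the example path lengths are standard approximations, not numbers from the slides:

```python
# Light in silica fiber travels at roughly c / n, with n ~ 1.47,
# i.e. about 200,000 km/s, so distance alone sets a latency floor.
C_KM_PER_S = 299_792.458
FIBER_INDEX = 1.47

def fiber_rtt_ms(path_km):
    """One-way fiber path length in km -> round-trip propagation delay in ms."""
    one_way_s = path_km / (C_KM_PER_S / FIBER_INDEX)
    return 2 * one_way_s * 1000.0

if __name__ == "__main__":
    # Illustrative path lengths; real fiber routes exceed great-circle distance.
    for name, km in [("San Diego-Chicago", 3500),
                     ("Chicago-Amsterdam", 7500),
                     ("San Diego-Amsterdam", 11000)]:
        print(f"{name}: ~{fiber_rtt_ms(km):.0f} ms RTT (propagation only)")
```

Switching, framing, and codec delay add on top of this floor, which is why sub-250ms collaboration is hard at intercontinental distances.
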
9
UIC/UCSD 10GE CAVEWave on the National LambdaRail
[Map of the National LambdaRail showing EVL and the
new CAVEwave links. Map source: John Silvester, Dave
Reese, Tom West, Ron Johnson]
The national-scale CAVEwave connects Chicago to
Seattle to San Diego; it will go to Washington D.C.
in April and connect to the Venter Institute, GSFC,
NCSA Access, and RRC
10
New! The OptiGrid 10Gb TeraWaves Implemented on
the TeraGrid between UIC and UCSD
Can now add TeraGrid partners from ETF campuses
via MPLS tunnels over more than a dozen 10Gb links
Many other 10Gb links to Chicago, Seattle, etc.
11
Dutch National Scale SURFnet (UvA)
StarPlane DWDM Backplane
[Diagram: university CPU clusters, each behind a
router and switch, connected across the StarPlane
DWDM backplane to SURFnet and the SURFnet NOC
(diagram: CdL)]
12
UIC/UCSD/UvA/NU Global Testbed: 2x10Gb available
between NetherLight and StarLight
13
Global OptIPuter Goal: Deploy OptIPuter Partner
Middleware for Production Early Adopters
20x10Gb global bandwidth now available to add
planned OptIPuter partners in Canada, Czech
Republic, Japan, Korea, Taiwan, UK, China,
Russia, Sweden, CERN, and Australia
14
Year 4 Goal: Scalable Full 10Gb Optimization
  • Proposed Year 4 goal was 4:1 national bisection
    bandwidth over 10Gb lambdas
  • UIC/NU/StarLight and Calit2/SDSC demonstrated
    10Gb link saturation in December 2005 and January
    2006
  • Two paths (CAVEwave L1 and the TeraGrid L2 MPLS)
    were tested with similar results giving better
    than 9Gb performance in both directions
  • disk-to-disk (2:1 bisection bandwidth, 20
    processors used)
  • disk-to-memory (1.6:1 bisection bandwidth, 16
    processors used)
  • memory-to-memory (1:1 bisection bandwidth, 10
    processors used)
  • We have beaten the Year 4 goals for 10- to
    20-processor clusters (see the arithmetic sketch
    below)

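A minimal sketch of how the ratios above can be read, assuming (consistent with the slide's numbers but not stated there explicitly) that n:1 is the aggregate demand of the GE-connected cluster nodes relative to the 10Gb lambda:

```python
def bisection_ratio(n_nodes, per_node_gbps=1.0, lambda_gbps=10.0):
    """Aggregate cluster demand relative to the wide-area lambda capacity.

    The 1 Gb/s per node (GE-connected cluster nodes) is an assumption
    that reproduces the slide's figures; it is not stated on the slide.
    """
    return (n_nodes * per_node_gbps) / lambda_gbps

if __name__ == "__main__":
    for test, nodes in [("disk-to-disk", 20),
                        ("disk-to-memory", 16),
                        ("memory-to-memory", 10)]:
        print(f"{test}: {bisection_ratio(nodes):.1f}:1 over a 10Gb lambda")
```

On this reading, 1:1 memory-to-memory means the ten nodes together saturate the lambda, beating the proposed 4:1 goal.
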
15
Year 4 Network and Hardware Infrastructure Team
Detailed Technology
  • NICs, E-Switches, and Middleware (UCSD, UIC,
    GSFC, UvA, NU)
  • 10GE to selected cluster nodes (UCSD, NU, UIC,
    GSFC, UvA)
  • InfiniBand (GSFC/ARC/NRL)
  • Control of L1/L2 e-switches (UvA, NU, UIC, UCSD)
  • Automated discovery and mapping of OptIPuter
    network (UCSD)
  • Deploy first DWDM on campus (UCSD Quartzite)
  • Optical Signaling, Control and Management (NU,
    UIC, UvA)
  • Backplane efforts towards SC06 (persistence,
    reliability, deployment)
  • Exploration of methods for control planes
  • Secure Photonic Interdomain Negotiator (SPIN)
  • Photonic Devices (UCSD, NU, UvA, GSFC, UIC)
  • Evaluate/implement DWDM hardware at several
    scales

16
Network and Hardware Infrastructure Team Year 4
Summary Goals
  • Deploy Gold Standard OptIPuter Service Definition
    for compute/storage/visualization/collaboration
    clusters installed at the five OptIPuter testbeds
    to push scaling, distance and bandwidth limits,
    and let new adopters plug'n'play (UCSD,
    UIC/NU/StarLight, UvA/NetherLight, UCI, and
    GSFC).
  • Propagate 10GigE NICs (UCSD, UIC/NU, UvA, GSFC).
  • Exploit switches installed in Chicago
    (StarLight), Seattle (PNWGP), and Amsterdam
    (NetherLight), which implement OptIPuter L1
    switching and control planes.
  • Persistent Gold Standard OptIPuters will be
    deployed, documented, and applied
  • for presentation at GLIF in Tokyo in September
  • as coordinated OptIPuter experiments at SC06 in
    Tampa in November.