1
US National Science Foundation Cooperative
Agreement ANI-9730202 to University of Illinois
at Chicago
  • Providing US research institutions with high-performance access to outstanding R&E institutions in the European region connected by the European NRNs' infrastructures

2
Euro-Link Leadership
  • Manuel Delfino and Olivier Martin, CERN
  • Ralphi Rom and Avi Cohen, IUCC
  • Peter Villemoes, NORDUnet
  • Dany Vandromme, RENATER2
  • Kees Neggers, SURFnet
  • Tom DeFanti and Maxine Brown, UIC/EVL

3
Euro-Link Scientific Applications
  • European participation/collaborations at iGrid
    2000

Scenes from iGrid 2000: http://www.startap.net/igrid2000
4
Scientific Applications: CERN
Networked experiments of the European Laboratory for Particle Physics (CERN). CERN provides experimental facilities for particle physics experiments, mainly in the domain of high-energy physics. Four very large experiments in man-made caverns intersect the LEP tunnel, constituting about half of CERN's total experimental program for the 1990s. 50 participating institutions and several hundred physicists, on five continents. CERN's Large Electron Positron (LEP) collider, in a 27 km tunnel, is the largest machine of this type in the world.
http://www.cern.ch
5
Scientific Applications: CERN
The Globally Interconnected Object Databases (GIOD) Project. The Compact Muon Solenoid (CMS) detector will cleanly detect the diverse signatures of new physics at the Large Hadron Collider (LHC), due to be operational in 2005. It will identify and precisely measure muons, electrons and photons over a large energy range. The data is overwhelming: even though the data from the CERN CMS detector will be reduced by a factor >10^7, it will amount to over a petabyte (10^15 bytes) per year accumulated for scientific analysis. Caltech, CERN, Hewlett-Packard, and others.
Particle physicists are engaged in large international projects to address a massive data challenge, with special emphasis on distributed data access.
http://pcbunn.cithep.caltech.edu
6
Scientific ApplicationsCERN
  • Distributed Particle Physics Research
  • California Institute of Technology, USA
  • CERN

This application demonstrates remote viewing and analysis of particle physics events. The application is the front end to an engineered object-oriented global system that incorporates grid middleware for authentication and resource discovery, a distributed object database containing several terabytes of simulated events, and a component that enables queries issued by the front-end application to be matched to available computing resources in the system (a matchmaking service).
http://pcbunn.cacr.caltech.edu
http://iguana.web.cern.ch/iguana/
http://vrvs.cern.ch
http://cmsdoc.cern.ch/orca
7
Scientific Applications: Israel
Interactive Simulation in the Field of Plant Nutrition. This US-Israel Bi-national Agricultural Research and Development (BARD) Foundation project involves hypothesis testing and result evaluation using an interactive graphic model. SimRoot, a 3D model developed at Penn State, graphically describes the deployment of plant root systems in soil and predicts the performance of plants under various environmental conditions, depending on their physiological characteristics. Tel Aviv University and Penn State University.
8
Scientific Applications: Israel
Visualization of Acetylcholinesterase: Nature's Vacuum Cleaner. Acetylcholinesterase (AcChoEase) is an enzyme that plays a key role in the human nervous system. In vertebrates, nerve impulses travel from cell to cell by means of a chemical messenger. When an electrical impulse reaches the end of one cell, messenger molecules (in this case acetylcholine, or AcCho) are released to diffuse through the fluid-filled intercellular synaptic gap. Upon reaching the destination cell, the AcCho molecules dock into special receptors, triggering a new electrical impulse. Much like a vacuum cleaner, the enzyme AcChoEase is constantly sweeping up and hydrolyzing AcCho during this process, so that the whole cycle can begin again. Chemicals that inhibit the action of AcChoEase are being used in the treatment of glaucoma, myasthenia gravis and, experimentally, Alzheimer's disease. In spite of the ability to exploit the enzyme, its precise mechanism of operation is still a mystery. Cornell University, USA; Weizmann Institute of Science, Israel.
9
Scientific Applications: Nordic Countries
European Incoherent SCATter (EISCAT). EISCAT studies the interaction between the Sun and the Earth as revealed by disturbances in the magnetosphere and the ionized parts of the atmosphere (these interactions also give rise to the spectacular aurora, or Northern Lights). The Incoherent Scatter Radar technique requires sophisticated technology, and EISCAT engineers are constantly involved in upgrading the systems. The EISCAT Scientific Association operates radars and receivers in several Nordic cities. Several Incoherent Scatter facilities are distributed around the world, such as Millstone Hill Observatory (MHO) in Westford, Massachusetts.
http://www.eiscat.uit.no/
10
Scientific Applications: Sweden
  • Steering and Visualization
  • of a Finite-Difference Code
  • on a Computational Grid
  • Royal Institute of Technology, Sweden
  • University of Houston, USA

This application enables computational steering
of electromagnetic simulations across distributed
resources using interactive visualization in a
virtual-reality environment.
www.nada.kth.se/erike/progress
11
Scientific Applications: France
The DØ Experiment is a worldwide collaboration of scientists conducting research on the fundamental nature of matter. The experiment is located at the world's premier high-energy accelerator, the Tevatron Collider, at the Fermi National Accelerator Laboratory. Worldwide collaborators include Fermilab; Brookhaven National Lab; CERN; Cornell University; DESY, Germany; KEK, Japan; Lawrence Berkeley Lab and Stanford Linear Accelerator Center; and several French institutions (DAPNIA/SPP; Centre de Physique des Particules de Marseille; Institut des Sciences Nucléaires de Grenoble; LPNHE, Universités Paris VI and VII; and Laboratoire de l'Accélérateur Linéaire).
http://www-d0.fnal.gov
http://www-dapnia.cea.fr
12
Scientific Applications: France
Sloan Digital Sky Survey. The SDSS enables the automatic, systematic study and exchange of data on stars, galaxies, nebulae, and large-scale structure. Participants: Institut d'Astrophysique de Paris; Johns Hopkins University; CFHT Corp. (Hawaii); University of Hawaii Institute for Astronomy, Honolulu.
http://www.sdss.org/sdss.html
13
Scientific Applications: The Netherlands
  • ALIVE: Architectural Linked Immersive Environment
  • Academic Computing Services Amsterdam (SARA), The Netherlands
  • Electronic Visualization Laboratory, University of Illinois at Chicago, USA

ALIVE is used to evaluate the usability of collaborative virtual reality for architectural design. The ALIVE project started in February 1999 at SARA in cooperation with EVL and OMA (Office for Metropolitan Architecture). In February 1998, Rem Koolhaas won the Richard H. Driehaus Foundation International Design Competition for the new Campus Center at Illinois Institute of Technology's historic Mies van der Rohe campus.
www.sara.nl
www.archfonds.nl
www.iit.edu/departments/pr/masterplan/mccortribucamcen.html
14
Euro-Link Future Plans
  • GriPhyN support
  • Grids: major funded European activities like the DataGrid and DataTAG
  • Access Grid deployment to Europe
  • Wavelengths and optical switching with DTF, I-WIRE, CA*net, SURFnet, CERN
  • Continue native multicast deployment to Euro-Link universities, following TransPAC's lead
  • Continue IPv6 activities with ESnet, Abilene, Euro-Link universities
  • iGrid 2002 in Amsterdam

15
  • Thomas A. DeFanti, Maxine Brown
  • Principal Investigators, STAR TAP
  • Linda Winkler, Andy Schmidt
  • STAR TAP Engineering

16
Who is Connected to STAR TAP?
  • The Next Generation Internet (NGI) initiative is a US Government R&D funding program for high-performance networks and applications
  • STAR TAP is the international component of US government agency research networks:
  • DREN: DoD Research and Engineering Network
  • ESnet: DOE Energy Sciences network
  • NREN: NASA Research and Education Network
  • NISN: NASA Information Science Network
  • vBNS: NSF very-high-speed Backbone Network Service

www.ngi.gov
17
Who is Connected to STAR TAP?
  • Internet2 is a consortium of over 185 universities working in partnership with industry and government to develop and deploy advanced network applications and technologies
  • Abilene, an Internet2 backbone network, interconnects at STAR TAP, as well as at other peering locations, with many of the National Research and Education Networks with which Internet2 collaborates

www.internet2.edu
18
STAR TAP: Science, Technology And Research Transit Access Point
Canada
Japan, Korea (2), Taiwan, Singapore (2), Australia (2), China
Norway, Iceland, Sweden, Finland, Denmark, Russia, France, Netherlands, CERN, Israel, Ireland, Belgium, Europe/DANTE, United Kingdom
Chile; Brazil ANSP; Brazil RNP; Mexico
US: Abilene, DREN, ESnet, NISN, NREN, vBNS/vBNS+
www.startap.net
http://www.ucaid.edu/abilene/html/itnpeerparticipants.html (Abilene ITN)
http://www.canet3.net/optical/peering_info/intl_peering.html (CA*net3 ITN)
19
What is STAR TAP?
  • Infrastructure
  • Engineering support
  • Engineering advanced services
  • QoS testbeds
  • Marketing
  • Documentation
  • Conference participation
  • Host annual meetings
  • Education and outreach
  • Team building
  • Liaison to network consortia
  • Application communities
  • Connection Point
  • Racks at AADS NAP (front/back views)

Managed by UIC in collaboration with Argonne National Laboratory, Northwestern (MREN/iCAIR) and Indiana University; operated by Ameritech Advanced Data Services.
www.startap.net
20
STAR TAP Engineering Support
  • Global NOC: first line of support
  • STAR TAP engineers: design and deployment

www.startap.net/ENGINEERING
noc.startap.net
21
STAR TAP Engineering Advanced Services
  • IPv4 Router
  • IPv6 Router (6TAP with ESnet)
  • Multicast
  • Globus Middleware/Grid Services Package
    (with Argonne/ISI)
  • Network Measurement (NLANR AMP)
  • Web Caching (NLANR)
  • QoS testbeds
  • Application-level network performance metrics
  • Experimental protocols for high-bandwidth
    applications
  • Distributed STAR TAP (Teleglobe, C&W)
  • Abilene/CA*net3 International Transit Networks

22
STAR TAP August 2001: External Partner Circuits Installed to NAP
  • FAPESP (Brazil): DS3, anticipated late 8/01
  • RNP2 (Brazil): DS3 complete to NY, awaiting customer's router
  • HEAnet: OC-3c being built for 8/24
  • BELnet: OC-3c being built for 8/24
  • KISTI: OC-3c being built for 8/24

23
Ameritech Chicago NAP
  • ATM hub for STAR TAP and MREN
  • Approximately 130 connections
  • Peaks to 6 Gbps load
  • Problem: port speed capped at OC-12c
  • Co-location for STAR TAP services
  • Migration to StarLight planned to simplify access

24
STAR TAP Documentation
  • Internet Dissemination
  • www.startap.net
  • General information
  • Engineering information
  • Application information
  • Publications

25
Team Building: iGrid 2000 at INET 2000, July 18-21, 2000, Yokohama, Japan
  • 14 regions: Canada, CERN, Germany, Greece, Japan, Korea, Mexico, Netherlands, Singapore, Spain, Sweden, Taiwan, UK, USA
  • 24 demonstrations featuring technical innovations in tele-immersion, large datasets, distributed computing, remote instrumentation, collaboration, streaming media, human/computer interfaces, digital video and high-definition television, and grid architecture development, as well as application advancements in science, engineering, cultural heritage, distance education, media communications, and art and architecture

www.startap.net/igrid2000
26
STAR TAP Activities: Annual Advisory Committee Meetings
STAR TAP International Advisory Committee Meeting
at INET 2000, Yokohama, July 2000
STAR TAP Meeting at Stockholm, June 2001
27
  • For more information
  • www.startap.net

28
The Future of STAR TAP: Enabling e-Science Research
29
What is StarLight?
  • StarLight is jointly managed and engineered by
  • International Center for Advanced Internet
    Research (iCAIR), Northwestern University
  • Electronic Visualization Laboratory (EVL),
    University of Illinois at Chicago
  • Mathematics and Computer Science Division (MCS), Argonne National Laboratory

30
What is StarLight?
  • StarLight is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications

710 N. Lake Shore Drive, Chicago; Abbott Hall, Northwestern University
Chicago view from 710
31
710 N.
Lake Shore Drive
  • Northwestern
  • Carriers' POPs
  • Chicago NAP


32
StarLight Infrastructure
  • StarLight is a large research-friendly
    co-location facility with space, power and fiber
    that is being made available to university and
    national/international network collaborators
    as a point of presence in Chicago

33
StarLight Colocation: Northwestern, 710 North Lake Shore Drive
  • Currently a DataComm facility for NU's DMS-100
  • Space for 40 equipment racks plus carriers
  • Shared space with NOC and engineering staff
  • Battery and diesel backup
  • Fiber
  • Dual entrance
  • Multiple carriers: Qwest, L3, AT&T, SBC, MFN, ...

34
StarLight Legacy Infrastructure
35
StarLight Status: Carriers
  • Currently present:
  • Ameritech
  • AT&T
  • Qwest
  • Soon to be installed:
  • Global Crossing
  • Teleglobe

36
StarLight Connections
  • STAR TAP (AADS NAP) is connected via two OC-12c ATM circuits, now operational
  • The Netherlands (SURFnet) is bringing two OC-12c POS circuits from Amsterdam to StarLight on September 1, 2001, and a 2.5 Gbps lambda to StarLight on September 15, 2001
  • Abilene will soon connect via GigE
  • Canada (CA*net3/4) is connected via GigE and 10GigE
  • I-WIRE, a State-of-Illinois-funded dark-fiber multi-10GigE DWDM effort involving Illinois research institutions, is being built; 18 strands to the Qwest Chicago PoP are in
  • NSF Distributed Terascale Facility (DTF): 4x10GigE network being engineered by PACI and Qwest
  • NORDUnet will be using StarLight's OC-12 ATM connection

37
StarLight Infrastructure
  • StarLight is a production GigE and trial 10GigE
    switch/router facility for high-performance
    access to participating networks

38
StarLight is Operational
  • Fiber/equipment at StarLight
  • Existing fiber: Ameritech, AT&T, Qwest
  • Soon to be installed: MFN, Global Crossing, Teleglobe, and others
  • StarLight equipment installed:
  • Cisco 6509 with GigE (plans for 10GigE)
  • IPv6 router
  • Juniper M10 (GigE and OC-12 interfaces)
  • Cisco LS1010 with OC-12 interfaces
  • Data mining cluster with GigE NICs
  • Visualization/video server cluster with GigE NICs
  • 40 racks initially reserved for co-location
  • SURFnet's 12000 GSR

39
StarLight Infrastructure
  • Soon, StarLight will be an optical switching facility for wavelengths
40
Evolving StarLight: Optical Network Connections
[Map: planned optical links connecting StarLight (Chicago) with CA*net4 (Vancouver, Seattle, Portland, NYC), Asia-Pacific circuits, SURFnet and CERN, U Wisconsin, PSC, IU, NCSA, the DTF 40Gb network (San Francisco, Los Angeles, San Diego/SDSC), Atlanta, and AMPATH]
41
StarLight Goals 2002-2005
  • Metropolitan optical switching at 10 Gb
  • International wavelength-switching hub, replicated in Amsterdam and other places, TBD
  • Host advanced technology trials, such as:
  • DWDM
  • Lambda conversion
  • Optical routing
  • Ultra-high-definition video, visualization and VR
  • Terascale computing connections
  • Petabyte data mining collaborations

42
StarLight Services
  • StarLight encourages NLANR collaborations
  • To provide tools and techniques for (university)
    customer-controlled 10 Gigabit network flows
  • To create general control mechanisms from
    emerging toolkits, such as Globus, to provide
    Grid network resource access and allocation
    services.
  • To provide a range of new tools, such as GMPLS
    and OBGP, for designing, configuring and managing
    optical networks and their components
  • To provide a new generation of tools for
    appropriate monitoring and measurements at
    multiple levels

43
StarLight Thanks
  • StarLight planning, research, collaborations, and outreach efforts at the University of Illinois at Chicago are made possible, in part, by funding from:
  • National Science Foundation (NSF) awards
    ANI-9980480, ANI-9730202, EIA-9802090,
    EIA-9871058, and EIA-0115809
  • NSF Partnerships for Advanced Computational
    Infrastructure (PACI) cooperative agreement
    ACI-9619019 to the National Computational Science
    Alliance
  • State of Illinois I-WIRE Program, and UIC cost
    sharing
  • Northwestern University for providing space,
    engineering and management
  • Argonne National Laboratory for StarLight and
    I-WIRE network engineering and planning
    leadership
  • NSF/ANIR, Bill St. Arnaud of CANARIE, Olivier Martin of CERN, and Kees Neggers of SURFnet for global optical networking leadership, and Steve Goldstein of NSF for years of nudging and nurturing
  • NSF/ACIR and NCSA/SDSC for DTF opportunities
  • Larry Smarr and Rick Stevens for off-scale
    leadership

44
Bring Us Your Lambdas!
  • www.startap.net/starlight
  • www.icair.org
  • www.evl.uic.edu
  • www.mcs.anl.gov
  • tom@uic.edu

45
Application-Centric Optical Network Monitoring: Requirements / Questions
Jason Leigh, Tom DeFanti
Euro-Link, NCSA and EVL
46
Application-Centric Monitoring of Optical Networks
  • Need to know how much bandwidth is available so that programs can adapt accordingly (e.g., if more bandwidth is available, then streaming visualization may increase image quality); a sketch of this control loop follows this list
  • Need to know how much bandwidth an application actually makes use of, so it can request responsible QoS allocations in the future
  • What kind of (trusted box) is needed to manage requests and queries to our new GigE/10GigE switch/routers?
  • We assume that the applications will do per-flow monitoring by themselves
  • We don't know which previous tools (e.g., pchar, pipechar, etc.) will work with GigE/10GigE switch/routers

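The first bullet above implies a simple control loop: measure the throughput the stream actually achieves, then step the image quality up or down. Here is a minimal Python sketch of that idea (mine, not the slides'; the quality tiers, thresholds, and numbers are invented for illustration):

```python
# Hypothetical quality tiers: (label, bandwidth needed in Mb/s). Invented values.
QUALITY_TIERS = [("low", 10), ("medium", 100), ("high", 500)]

def measured_throughput_mbps(bytes_sent: int, elapsed_s: float) -> float:
    """Achieved throughput over the last transmission window, in Mb/s."""
    return (bytes_sent * 8) / (elapsed_s * 1_000_000)

def pick_quality(available_mbps: float) -> str:
    """Choose the highest tier whose bandwidth requirement is met."""
    best = QUALITY_TIERS[0][0]
    for label, required in QUALITY_TIERS:
        if available_mbps >= required:
            best = label
    return best

# Example: the visualization sent 250 MB of frames in the last 4 seconds.
bw = measured_throughput_mbps(250_000_000, 4.0)
print(f"measured {bw:.0f} Mb/s -> stream at {pick_quality(bw)} quality")
```

The same loop could instead consume an external bandwidth estimate (from the monitoring box discussed above) rather than the application's own send rate.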
47
TeraGrid: Wide-Area Optically Connected PC Clusters
  • Basic TeraNode
  • PCs: hard to walk and chew gum at the same time
  • Modest cluster: 20 processors, each with:
  • GHz processor
  • 2-4 GB RAM/processor
  • 20 GB disk/processor
  • Gigabit Ethernet card
  • Optical networks become the system bus for these nodes
  • TeraNode-D (TND): Data
  • Multi-terabyte RAID disks
  • TeraNode-C (TNC): Computation
  • Large number of processors and memory
  • TeraNode-V (TNV): Visualization
  • High-end graphics cards and lots of memory
  • Various configurations of displays, from passive stereo walls to CAVEs to tiled displays

48
TeraNodes in Action: Interactive Visual Tera Mining (Visual Data Mining of Terabyte Data Sets)
[Diagram: Data Mining Servers (TND-DSTP) in Chicago, Amsterdam, and at NWU, NCSA, ANL, etc., feeding a Parallel Data Mining Correlation node (TNC)]
  • Problem: to touch a terabyte of data interactively and to visualize it (see the arithmetic sketch at the end of this slide)
  • 100 Mb/s: 24 hours to access 1 terabyte of data
  • 500 Mb/s: 4.8 hours using a single PC
  • 10 Gb/s: 14.4 minutes using a 20-node PC cluster
  • Need to parallelize data access and rendering

[Diagram, continued: Parallel Visualization (TNV) rendering Tera Map and Tera Snap views]
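As a quick check of the access-time bullets above, the arithmetic works out if the slide's rates are read as link rates in bits per second and a terabyte as 2^40 bytes (my sketch; the interpretation of the units is an assumption):

```python
# Transfer time for 1 TB (2**40 bytes) at a given aggregate link rate.
TERABYTE_BITS = (2 ** 40) * 8

def transfer_time_s(rate_bps: float) -> float:
    """Seconds to move one terabyte at the given rate."""
    return TERABYTE_BITS / rate_bps

for label, rate_bps in [("100 Mb/s", 100e6),
                        ("500 Mb/s (single PC)", 500e6),
                        ("10 Gb/s (20-node cluster)", 10e9)]:
    t = transfer_time_s(rate_bps)
    print(f"{label}: {t / 3600:.1f} h ({t / 60:.1f} min)")
# Prints roughly 24 h, 4.9 h, and 14.7 min, matching the slide's figures.
```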
49
TeraNode (Cluster) Monitoring Tool
  • Monitor CPU utilization, available memory, and in/out bandwidth per PC connected to GigE and to the SAN (Myrinet); preferably a transparent program running in the background (a sketch follows this list)
  • We need this additional information because bandwidth utilization alone does not explain the level of utilization of memory, disk, and CPU
  • Results streamed to an external daemon that collects them in a flat file or database that can be queried/visualized in real time to see the latest performance or historical information

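A minimal sketch of such a per-node monitor, written with the modern psutil library for brevity (the original 2001 tooling predates it; the NIC name and output file are assumptions):

```python
import json
import time

import psutil  # convenience library for CPU/memory/NIC counters

NIC = "eth0"  # assumed interface name; use the GigE or Myrinet NIC per node

def sample() -> dict:
    """One snapshot of CPU load, free memory, and per-NIC byte counters."""
    net = psutil.net_io_counters(pernic=True)[NIC]
    return {
        "t": time.time(),
        "cpu_pct": psutil.cpu_percent(interval=1.0),        # averaged over 1 s
        "mem_free_mb": psutil.virtual_memory().available // 2**20,
        "nic_in_bytes": net.bytes_recv,
        "nic_out_bytes": net.bytes_sent,
    }

# Append newline-delimited JSON to a flat file that an external daemon,
# database loader, or live plotter can tail, query, and visualize.
if __name__ == "__main__":
    with open("node_metrics.jsonl", "a") as log:
        while True:
            log.write(json.dumps(sample()) + "\n")
            log.flush()
```

Running this in the background on each cluster node yields exactly the flat-file stream the last bullet asks for.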
50
Qs w/out As: Monitoring for Optical Networks
  • Is there a standard way/protocol to securely query and control optical switches? E.g., do optical switches talk SNMP? Can an application use this to talk to the switches to allocate GMPLS light paths? (A sketch of an SNMP-style port query follows this list.)
  • Can this query tell us how much bandwidth is available and how much is going through any given path?
  • Does this query occur over an external ATM link, or is this also done over the optical network?
  • Can we deploy a monitoring server at each switch that applications can query without requiring incredible degrees of security?
  • Can currently existing tools be reused for GigE-level networks and beyond? E.g., OCXmon, pchar?
  • How can we verify that a path has been created, and that our packets are actually going over it?
  • Who is doing work at which levels? E.g., who is building middleware to do this?

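On the SNMP question: if a switch does expose the standard IF-MIB counters, a per-port utilization probe can be as small as the sketch below, which shells out to the Net-SNMP snmpget tool. The host name, community string, and port index are placeholders, and whether a given optical switch answers such queries at all is exactly the open question above:

```python
import re
import subprocess
import time

def if_in_octets(host: str, community: str, if_index: int) -> int:
    """Read IF-MIB::ifHCInOctets for one port via the Net-SNMP CLI."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host,
         f"IF-MIB::ifHCInOctets.{if_index}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(re.search(r"Counter64:\s*(\d+)", out).group(1))

# Estimate one port's inbound rate by sampling the counter twice.
host, community, port = "switch.example.net", "public", 1  # placeholders
first = if_in_octets(host, community, port)
time.sleep(10)
second = if_in_octets(host, community, port)
print(f"port {port}: {(second - first) * 8 / 10 / 1e6:.1f} Mb/s inbound")
```

This answers "how much is going through a given path" for one hop; verifying a light path or allocating one via GMPLS would need more than MIB reads.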
51
New Protocol Work
  • What would best be done by NLANR pros?
  • What are good student projects?
  • How do we propagate protocols to, and educate, the e-science community?

52
Modeling of High Performance Protocols
  • Goal: figure out, under what circumstances and given whatever information is available about the underlying networks, which protocol to use
  • Examine protocols like Parallel TCP, FEC, and Reliable UDP to:
  • 1. Identify the tweakable parameters
  • 2. Develop reusable simulation scripts using NS-2
  • 3. Perform measurements to confirm the simulation model
  • Examples of tweakable parameters (a simple throughput model is sketched after this list):
  • Parallel TCP: number of sockets, window size, etc.
  • FEC: ratio of data packets to redundant packets, packet size, etc.
  • Reliable UDP: packet size, number of packets sent before retransmitting missed packets, etc.

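As one example of what such a model could encode, the sketch below applies the well-known Mathis et al. approximation (per-flow TCP throughput ≈ MSS / (RTT · sqrt(loss))) to estimate when N parallel sockets pay off. This is my illustration, not the project's actual NS-2 script, and the link parameters are invented:

```python
import math

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Mathis et al. steady-state approximation for a single TCP flow."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss))

def parallel_tcp_bps(n_sockets: int, mss_bytes: int, rtt_s: float,
                     loss: float, link_bps: float) -> float:
    """N independent flows, capped by the bottleneck link rate."""
    return min(n_sockets * tcp_throughput_bps(mss_bytes, rtt_s, loss), link_bps)

# Invented transatlantic-like parameters: 1460-byte MSS, 100 ms RTT,
# 0.01% loss, OC-12 (622 Mb/s) bottleneck.
for n in (1, 4, 16):
    bw = parallel_tcp_bps(n, 1460, 0.100, 1e-4, 622e6)
    print(f"{n:2d} sockets: ~{bw / 1e6:.0f} Mb/s")
```

Sweeping loss, RTT, and socket count in such a script is a cheap first pass before confirming with NS-2 and live measurements.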
53
Extensions of High Performance Protocols
  • Socket-cached Parallel TCP: comparison of throughput when parallel sockets are kept open all the time vs. when they are opened and closed between transactions, as a function of file size (a timing sketch follows this list)
  • FEC: augment FEC with reliable TCP to provide a fully reliable, low-latency transmission scheme; examine throughput, latency, and jitter under this new scheme
  • Reliable UDP: augment RUDP to include congestion control; augment Reliable UDP to make it more of a streaming protocol rather than a bulk file transfer protocol; at what point does it make more sense to use TCP rather than RUDP?
  • Extension of the above high-throughput protocols to move tera/peta-scale data files from one RAID cluster to another, i.e., PC nodes connected via GigE all sending data in parallel to remote PC nodes, all participating in the movement of a massive data file; e.g., parallel TCP or RUDP on parallel processors
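A minimal way to frame the first comparison: time many small transactions over one cached (persistent) socket versus a fresh connection per transaction. The sketch below runs against a trivial local sink server so it is self-contained; the host, port, transaction count, and sizes are invented test values:

```python
import socket
import threading
import time

HOST, PORT, N, CHUNK = "127.0.0.1", 9099, 200, 64 * 1024  # invented test values

def handle(conn):
    """Discard whatever a client sends."""
    with conn:
        while conn.recv(65536):
            pass

def sink_server():
    srv = socket.create_server((HOST, PORT))
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

threading.Thread(target=sink_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start
payload = b"x" * CHUNK

def fresh_connections() -> float:
    """Open and close a socket for every transaction."""
    t0 = time.perf_counter()
    for _ in range(N):
        with socket.create_connection((HOST, PORT)) as s:
            s.sendall(payload)
    return time.perf_counter() - t0

def cached_connection() -> float:
    """Keep one socket open for all transactions."""
    t0 = time.perf_counter()
    with socket.create_connection((HOST, PORT)) as s:
        for _ in range(N):
            s.sendall(payload)
    return time.perf_counter() - t0

print(f"fresh: {fresh_connections():.2f}s   cached: {cached_connection():.2f}s")
```

Repeating the run while varying CHUNK gives the throughput-versus-file-size curve the bullet describes; over a real WAN, the per-connection setup RTTs make the gap far larger than on loopback.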