Title: ALICE-USA Grid-Deployment Plans (By the way, ALICE is an LHC Experiment, TOO!)
1. ALICE-USA Grid-Deployment Plans (By the way, ALICE is an LHC Experiment, TOO!)
- Or
- (We Sometimes Feel Like an AliEn in Our Own Home)
- Larry Pinsky, Computing Coordinator, ALICE-USA
2. ALICE-USA Institutions
- 1 Creighton University
- 2 Kent State University
- 3 Lawrence Berkeley National Laboratory
- 4 Michigan State University
- 5 Oak Ridge National Laboratory
- 6 The Ohio State University
- 7 The Ohio Supercomputing Center
- 8 Purdue University
- 9 University of California, Berkeley
- 10 University of California, Davis
- 11 University of California, Los Angeles
- 12 University of Houston
- 13 University of Tennessee
- 14 University of Texas at Austin
- 15 Vanderbilt University
- 16 Wayne State University
Already Official Members of ALICE
Major Computing Sites
3. ALICE Computing Needs
From <http://pcaliweb02.cern.ch/NewAlicePortal/en/Collaboration/Documents/TDR/Computing.html> as posted 25 Feb. 2005
Table 2.6                      T0      Sum T1s   Sum T2s   Total
CPU peak (MSI2K)               7.5     13.8      13.7      35
Transient storage (PB)         0.44    7.6       2.5       10.54
Permanent storage (PB/year)    2.3     7.5       0         9.8
Bandwidth in (Gbps)            8       2         0.075
Bandwidth out (Gbps)           6       1.5       0.27
4. ALICE-USA Target
One Full External T1 with Full Share of Supporting T2 Capabilities Net in the US, Based on 6 External T1s
Year                                     2008     2009     2010
Ramp (% of 2010 target)                  20       40       100
ALICE-USA sum:
  CPU (MSI2K)                            0.69     1.38     3.44
  Disk (PB)                              0.25     0.51     1.26
  Permanent storage (PB/yr)              0.19     0.38     0.94
  Network (Gbps)                         0.769    1.538    3.845
Each major US site (1/3 of ALICE-USA sum):
  CPU (MSI2K)                            0.23     0.46     1.15
  Disk (PB)                              0.08     0.17     0.42
  Permanent storage (PB/yr)              0.06     0.13     0.31
  Network (Gbps)                         0.256    0.513    1.282
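The table's internal arithmetic can be checked directly: the 2008 and 2009 columns are, to within rounding of the original figures, 20% and 40% of the 2010 values, and each per-site row is one third of the corresponding ALICE-USA sum. The following minimal Python sketch is not part of the original slides; the ramp interpretation and the 1/3 split are simply read off the table above.

# Hypothetical check (not from the slides): reproduce the ALICE-USA target
# numbers above from the 2010 column, assuming the 20/40/100 row is the
# fraction of the 2010 target and each major US site takes 1/3 of the sum.

ramp = {2008: 0.20, 2009: 0.40, 2010: 1.00}

target_2010 = {            # 2010 ALICE-USA sums from the table above
    "CPU (MSI2K)":        3.44,
    "Disk (PB)":          1.26,
    "Perm. st. (PB/yr)":  0.94,
    "Network (Gbps)":     3.845,
}

for year, fraction in ramp.items():
    print(f"--- {year} ({fraction:.0%} of 2010 target) ---")
    for resource, full_2010 in target_2010.items():
        us_sum = full_2010 * fraction
        per_site = us_sum / 3.0   # three major US computing sites
        print(f"{resource:18s}  sum = {us_sum:6.3f}   per site = {per_site:6.3f}")

Rounded to two decimals, the output agrees with the table to within about 0.01; the small residual differences come from rounding in the original figures.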
Note: OSC is a member of ALICE and has made this commitment now.
5. ALICE-USA Commitments
- OSC is committed now to obtaining NSF funding to acquire this level of support.
- LBL (NERSC) and UH are DOE funded and committed to supplying these resources, contingent upon DOE's approval of the ALICE-USA EMCAL project.
- All three institutions CONTINUE TO SUPPORT THE DATA CHALLENGES.
- DOE is currently well into the decision process regarding budgeting the construction of EMCAL by ALICE-USA for ALICE. Funding to support prototyping has been provided.
6. ALICE-USA Data Challenge Support
- Since 2002, ALICE-USA has provided significant support for the Data Challenges.
- Most recently (2004), ALICE-USA supplied 14% (106 of the total 755 MSI2k-hours) of the CPU and external storage capacity.
- For 2005-2007, ALICE-USA intends to supply a similar fraction from existing commitments.
7. ALICE-USA Grid Middleware
- We will support ALICE's needs with whatever middleware is consistent with them,
- as well as with what is consistent with our local needs in the US.
- Our institutions are participating in OSG in the US, and some are members of PPDG.
8. Simplified view of the ALICE Grid with AliEn
9. ALICE VO interaction with various Grids
10. Some Issues
- ALICE software will have to blend with many Grid infrastructures.
- ALICE will use resources that include many different platforms (e.g., AliEn, PROOF, and AliRoot already run on a variety of platforms such as IA32, IA64, and G5s).
- The detailed OS versions cannot be mandated on all resources that will need to be used.
- The ALICE File Catalogs, Task Queues, and Production Manager will interface directly to the UI/RB's local services.
- ALICE is evolving towards a Cloud model of distributed computing and away from a rigid MONARC model; T1s are distinguished from T2s by local mass storage (MS) capability and not by tasks.
11. Meeting Next Week at LBL
- There will be a meeting at LBL next Friday, June 10, to discuss ALICE and OSG specifically.
12. A Joint Grid Project Between Physics Departments at Universities in Texas
- Initiated by the High Energy (Particle) Physics Groups
- To Harness Unused Local Computing Resources
13. In Support of HiPCAT
- High Performance Computing Across Texas (HiPCAT) is a consortium of Texas institutions
that use advanced computational technologies to
enhance research, development, and educational
activities. These advanced computational
technologies include traditional high performance
computing (HPC) systems and clusters, in addition
to complementary advanced computing technologies
including massive data storage systems and
scientific visualization resources. The advent of
computational grids -- based on high speed
networks connecting computing resources and grid
'middleware' running on these resources to
integrate them into 'grids' -- has enabled the
coordinated, concurrent usage of multiple
resources/systems and stimulated new methods of
computing and collaboration. HiPCAT institutions
support the development, deployment, and
utilization of all of these advanced computing
technologies to enable Texas researchers to
address the most challenging computational
problems.
14. And TIGRE
- The goal of the Texas Internet Grid for Research and Education (TIGRE) project is to build a computational grid that integrates computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. TIGRE will enhance the computational capabilities for Texas researchers in academia, government, and industry by integrating massive computing power. Areas of research that will particularly benefit include biomedicine, energy and the environment, aerospace, materials science, agriculture, and information technology.
15. Setting Up THEGrid
- THEGrid has set up a Grid infrastructure using existing hardware in Physics Departments on campuses in Texas.
- Initially, a Grid3-like approach was taken using VDT (going to OSG soon).
- Local unused resources were harnessed using Condor.
16. Using THEGrid
- Individual students and faculty at each participating campus can submit batch jobs!
- Jobs are submitted through a local portal on each campus (a rough sketch of the underlying Condor submission step follows below).
- The middleware distributes the submitted jobs to one of the available locations throughout THEGrid.
- The output from each job is returned to the user.
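The slides do not spell out the submission mechanics behind the local portals. As a rough illustration only, the Python sketch below (with a hypothetical executable and placeholder file names) writes a standard Condor submit description and hands it to the condor_submit command-line tool, which is the kind of step such a portal would perform on the user's behalf.

# Hypothetical illustration: submit one batch job to a Condor pool, roughly as
# a campus portal might do for a user. The executable, arguments, and file
# names are placeholders, not taken from THEGrid itself.
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    universe   = vanilla
    executable = analysis.sh
    arguments  = run42.root
    output     = job_$(Cluster).out
    error      = job_$(Cluster).err
    log        = job_$(Cluster).log
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    queue 1
""")

with open("analysis.sub", "w") as f:
    f.write(submit_description)

# condor_submit reports the assigned cluster id; Condor then matches the job
# to an available (idle) machine in the pool and returns the output files.
result = subprocess.run(["condor_submit", "analysis.sub"],
                        capture_output=True, text=True, check=True)
print(result.stdout)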
17. THEGrid
Texas Tech will run the OSG VOMS server.