
1
Toward a Campus-Wide Grid Computing System
  • An Overview of The Lattice Project
  • Adam L. Bazinet and Michael P. Cummings
  • Laboratory of Molecular Evolution
  • Center for Bioinformatics and Computational
    Biology

2
Outline
  • Grid computing motivation
  • Goals of The Lattice Project
  • Basic architecture
  • Our current production Grid system
  • Implementation details
  • Results of usage
  • Demo
  • Research and development
  • Task Computing with colleagues at Fujitsu
  • Creating Grid-enabled workflows

3
Grid Computing
  • Definition: A model of distributed computing that
    uses resources that are geographically and
    administratively disparate. Individual users can
    access computers and data transparently, without
    having to consider location, operating system,
    account administration, and other details. In
    Grid computing the details are abstracted, and
    the resources are virtualized.

4
Why Go Grid?
  • Scientific problems are solved faster
  • Parallel execution means higher throughput
  • Make compute resources a commodity
  • Analogous to the electrical power grid
  • Foster growth and interaction in the research
    community
  • Use of the Grid spans departments and domains
  • Grid resources are typically shared resources

5
Outline
  • Grid computing motivation
  • Goals of The Lattice Project
  • Basic architecture
  • Our current production Grid system
  • Implementation details
  • Results of usage
  • Demonstration
  • Research and development
  • Task Computing with colleagues at Fujitsu
  • Creating Grid-enabled workflows

6
The Lattice Project: Initial Goals
  • Develop a Grid system for scientific research
    that
  • Speeds up workflows by Grid-enabling various
    programs
  • Is simple and intuitive
  • Takes advantage of heterogeneous resources
  • Is capable of managing large numbers of jobs
    (thousands)
  • Supports multiple users and lowers the barriers
    to getting involved
  • Is community-driven and supported

7
Principles of Design
  • Make use of well supported open source software
  • Globus Toolkit
  • BOINC
  • Condor
  • Engineered software should be scalable, modular,
    and robust
  • Expose programs as well-defined services
  • Arbitrary user-supplied code cannot be run

8
Outline
  • Grid computing motivation
  • Goals of The Lattice Project
  • Basic architecture
  • Our current production Grid system
  • Implementation details
  • Results of usage
  • Demo
  • Research and development
  • Task Computing with colleagues at Fujitsu
  • Creating Grid-enabled workflows

9
Terminology
  • Client: A Grid user interface OR a machine that
    performs computation
  • Grid Service: A Grid-enabled program
  • Scheduler: Decides where Grid jobs will run
  • Resource: Executes Grid jobs

10
Basic Architecture (1 of 3)
11
Basic Architecture (2 of 3)
12
Basic Architecture (3 of 3)
13
Outline
  • Grid computing motivation
  • Goals of The Lattice Project
  • Basic architecture
  • Our current production Grid system
  • Implementation details
  • Results of usage
  • Demo
  • Research and development
  • Task Computing with colleagues at Fujitsu
  • Creating Grid-enabled workflows

14
Software Components
  • Globus Toolkit version 3.2.1
  • Backbone of the Grid
  • http://www.globus.org/
  • Condor-G
  • Grid-level scheduler / resource broker
  • http://www.cs.wisc.edu/condor/
  • BOINC: Berkeley Open Infrastructure for Network
    Computing
  • SETI@home-style desktop grid
  • http://boinc.berkeley.edu/
  • Custom components
  • GSBL, GSG, Globus-BOINC adapter, MDS-matchmaking
    bridge, user interface(s), administrative
    scripts, and much more

15
Globus Toolkit 3
  • Key components
  • Globus Core
  • Grid service hosting environment
  • GSI: Grid Security Infrastructure
  • Uses public key cryptography
  • Secures communication
  • Authenticates and authorizes Grid users
  • WS GRAM: Job management
  • GASS: Point-to-point file transfer
  • MDS2: Information provider

16
Condor-G
  • Condor-G is part of the Condor suite
  • Resources and jobs send Condor-G descriptions of
    themselves, called ClassAds
  • Condor-G matches Grid jobs to suitable resources,
    then submits and manages them
  • This process is called matchmaking

17
BOINC
  • Most novel feature of our Grid
  • Public computing model
  • Untrusted resources
  • Is potentially our largest resource
  • We have targeted 3 platforms
  • Windows / Linux x86 / Mac OS X

18
Our Current Grid System
19
User Interface
  • The Grid Brick: a machine used to submit Grid
    jobs
  • Our primary interface for Grid users
  • Command line clients mimic normal program
    execution
  • Lattice Intranet
  • Provides instructions for submitting jobs and
    managing data input and output
  • Provides tools for describing and monitoring jobs
  • Other possibilities
  • Web portal model of job submission
  • A client capable of composing complex workflows
    using Task Computing and Semantic Web technology
    developed by collaborators at Fujitsu
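Because the command-line clients mimic normal program execution, submitting a job looks much like running the program locally. A hypothetical session is sketched below; the service name and flags are illustrative, and only the lattice_submit / lattice_retrieve command names come from the Grid client stack itself:

```
# Submit an alignment job to the Grid, passing arguments as usual
lattice_submit clustalw -infile=seqs.fasta -outfile=seqs.aln

# Later, retrieve the output files for the finished job
lattice_retrieve clustalw <job-id>
```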

20
Demonstration
  • Job submission

21
Basic Architecture - Client/Service
22
Grid Client Stack
Command-line Interface
  • Perl layer
  • lattice_submit / lattice_retrieve
  • Service-specific submit / retrieve scripts
  • Client.pm base Perl module
  • Java layer
  • Service-specific submit / retrieve classes
  • GSBL (Grid Service Base Library)
  • Globus API
Service-specific templates and stubs are
created by the Grid Service Generator
23
Grid Service Stack
Grid Service Hosting Environment, a.k.a. the
container
  • Java layer
  • Service-specific implementation
  • GSBL (Grid Service Base Library)
  • Globus API
Service-specific templates and stubs are
created by the Grid Service Generator
24
Tools for Writing Grid Services
  • Grid Service Base Library (GSBL)
  • Java API for building Grid services with the
    Globus Toolkit
  • Shields programmers from having to work with the
    Globus API directly
  • Provides a high-level interface for operations
    such as job submission and file transfer
  • Grid Service Generator (GSG)
  • Simplifies the process of creating Grid Services
  • Intended for use with GSBL

25
GSBL Design and Features
  • Classes for
  • Clients and services (base classes)
  • Argument description and processing
  • File transfers
  • Job submission and control
  • Security configuration
  • Uses Java synchronization and Globus
    notifications to paper over the event-based model

26
Grid Service Generator
  • Deploying a Grid service with GT3 is absurdly
    complicated
  • Many files and namespaces; lots of potential
    typos
  • GSG takes as input a few parameters (service
    name, location, an XML argument description,
    etc.) and generates all requisite configuration
    files and skeleton Java classes
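As a sketch, the XML argument description the GSG consumes might look like the following. The element and attribute names here are invented for illustration; the presentation does not show the actual schema:

```xml
<!-- Hypothetical argument description for a Clustal W service -->
<service name="clustalw">
  <argument name="infile"    type="file" required="true"/>
  <argument name="outfile"   type="file" required="true"/>
  <argument name="quicktree" type="flag" required="false"/>
</service>
```

From a description like this, the GSG can generate the configuration files, client stubs, and skeleton Java classes for a new service.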

27
Grid Services
Application   Condor (Linux/UNIX)   BOINC Linux x86   BOINC Win32   BOINC Mac OS X
BLAST1        Yes                   No                No            No
Clustal W     Yes                   Yes               Yes           Yes
CNS           Yes                   Yes               Yes           No
Lamarc        Yes                   Yes               Yes           Yes
MDIV          Yes                   Yes               Yes           Yes
Migrate-N     Yes                   Yes               Yes           Yes
Modeltest     Yes                   Yes               Yes           Yes
MrBayes       Yes                   Yes               Yes           Yes
ms            Yes                   Yes               Yes           Yes
Muscle        Yes                   Yes               Yes           Yes
PAUP2         Yes                   No                No            No
Phyml         Yes                   Yes               Yes           Yes
Pknots        Yes                   Yes               Yes           Yes
Seq-gen       Yes                   Yes               Yes           Yes
Snn           Yes                   Yes               Yes           Yes
ssearch       Yes                   Yes               Yes           Yes
Structure3    Yes                   No                No            No
28
Grid Services
  • Creating Grid Services requires
  • Knowledge of the application
  • Techniques for compiling and porting the
    application to various platforms
  • Knowledge of the infrastructure so it can be
    effectively tested and deployed
  • Challenges
  • Maintaining bodies of Grid Service code as the
    number of applications grows and new versions of
    applications are released
  • Minimizing the number of updates that need to be
    applied when the framework changes

29
Basic Architecture - Scheduling
30
Condor-G ClassAds
  • Resources and jobs send Condor-G descriptions of
    themselves, called ClassAds
  • Jobs require certain capabilities of resources
  • Resources advertise their capabilities
  • Similar to a dating service: a central broker
    points pairs of compatible jobs and resources at
    each other
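For illustration, a matched pair of ClassAds might look like this. The attribute names follow standard Condor ClassAd conventions; the values are made up:

```
# Job ClassAd: what the job requires of a resource
MyType       = "Job"
Requirements = (OpSys == "LINUX" && Arch == "INTEL" && Memory >= 512)

# Machine ClassAd: what the resource advertises
MyType = "Machine"
OpSys  = "LINUX"
Arch   = "INTEL"
Memory = 1024
```

The matchmaker pairs a job with a machine when each side's Requirements expression evaluates to true against the other's attributes.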

31
Condor-G ClassAds
32
Generating ClassAds
  • Job ClassAds are generated by the Condor-G job
    manager
  • Job requirements are specified in the Grid
    service configuration files
  • Resource ClassAds are generated by extracting
    information from MDS
  • Lattice information providers supply data
    required for matchmaking

33
Monitoring and Discovery System (MDS2)
  • Globus information services component
  • LDAP-based
  • Answers questions like
  • What resources are available?
  • What capabilities do these resources have?
  • What is the load on these resources?
  • This in turn allows for intelligent decisions to
    be made in areas such as scheduling and resource
    accounting
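Because MDS2 is LDAP-based, it can be queried with standard LDAP tools. A query against a Grid host might look like the following; the hostname is a placeholder, port 2135 and the base DN reflect common MDS2 deployments, and the search filter is illustrative:

```
ldapsearch -x -h grid.example.edu -p 2135 \
    -b "mds-vo-name=local, o=grid" "(objectclass=MdsHost)"
```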

34
MDS to Condor-G Diagram
35
Basic Architecture - Resources
36
Current Grid Resources
  • http://lattice.umiacs.umd.edu/resources/
  • UMIACS Condor pool
  • 400 processors
  • BOINC pools
  • Clients on campus: > 100
  • Public (off-campus) clients: > 1,000

37
BOINC
  • Works on the pull model:
  • One or more servers create workunits
  • Clients connect asynchronously, pull down work,
    and return the results
  • Clients are relatively lightweight and easy to
    install and manage
  • One client can crunch work for multiple projects
  • Participants can join teams and are given credit
    for the work they complete
  • http://lattice.umiacs.umd.edu/boinc_public
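The pull model above can be sketched in a few lines of Python. This is a toy simulation, not BOINC's actual protocol: an in-memory queue stands in for the project server's workunit database, and the client asks for work whenever it is ready:

```python
from queue import Queue

class Server:
    """The "project server": creates workunits and collects results."""
    def __init__(self, workunits):
        self.todo = Queue()
        for wu in workunits:
            self.todo.put(wu)
        self.results = {}

    def get_work(self):
        # Clients pull work; the server never pushes.
        return None if self.todo.empty() else self.todo.get()

    def report(self, wu, result):
        self.results[wu] = result

def run_client(server, compute):
    """A client: connects, pulls work, computes, returns results."""
    while (wu := server.get_work()) is not None:
        server.report(wu, compute(wu))

server = Server(workunits=range(5))
run_client(server, compute=lambda n: n * n)   # e.g. square each workunit
print(server.results)   # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```

The essential property is that clients initiate every exchange, so the server never needs to reach machines behind firewalls or NAT, which is what makes public, untrusted resources practical.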

38
Globus-BOINC Adapter
  • Consists of a number of components that allow us
    to run Grid Services on BOINC
  • BOINC job manager
  • Custom validator and assimilator
  • Registers BOINC with Globus as a GRAM-addressable
    resource
  • BOINC compatibility library eases the process of
    porting applications to BOINC

39
Demonstration
  • Check job status

40
Research Projects Using the Grid
  • The Laboratory of David Fushman has run
    protein-protein docking algorithms on Lattice
  • CNS is the featured Grid service in this project
  • Floyd Reed and Holly Mortensen from the
    Laboratory of Sarah Tishkoff have run a number of
    population genetics simulations
  • MDIV and IM are the featured Grid services
  • The Laboratory of Michael Cummings has run
    statistical phylogenetic analyses
  • GSI is the featured Grid service

41
Results of Grid Usage
  • IM: 0.13 CPU years (BOINC)
  • MDIV: 4.93 CPU years (BOINC)
  • CNS: 12.4 CPU years (BOINC)
  • GSI: 94.05 CPU years (Condor)
  • Total: 111.51 CPU years

42
Outline
  • Grid computing motivation
  • Goals of The Lattice Project
  • Basic architecture
  • Our current production Grid system
  • Implementation details
  • Results of usage
  • Demo
  • Research and development
  • Task Computing with colleagues at Fujitsu
  • Creating Grid-enabled workflows

43
GT4 Research and Development
  • We are currently upgrading the Grid system to use
    Globus Toolkit 4.0
  • GT4 adheres strictly to emerging and established
    Web service standards
  • Actively developed and supported
  • Many components have been greatly improved
  • GridFTP/RFT (will replace GASS)
  • WS GRAM
  • MDS4 (XML-based; replaces the LDAP-based MDS2)
  • Our basic architecture will remain the same, and
    the upgrade will be made easier because of tools
    we have already developed (GSBL, GSG)

44
Outline
  • Grid computing motivation
  • Goals of The Lattice Project
  • Basic architecture
  • Our current production Grid system
  • Implementation details
  • Results of usage
  • Demo
  • Research and development
  • Task Computing with colleagues at Fujitsu
  • Creating Grid-enabled workflows

45
Fujitsu Task Computing Research
  • http://taskcomputing.org/
  • Fujitsu Laboratories of America, Inc.
  • College Park, Maryland

46
Task Computing (TC)
  • Goals of Task Computing
  • Lets ordinary end-users accomplish complex tasks
    easily in environments rich with applications,
    devices, and services
  • Tasks can be composed on-the-fly from the
    services found in each environment and on the
    Internet
  • Then, tasks can be shared and edited later by
    others to suit their needs
  • Based on Semantic Web Services technology, TC
    provides many ways to interact with tasks
    composed of services

47
The Core Idea
The key is Semantic Service Descriptions (SSDs)
for resources of all kinds:
  • Web Services
  • OS/Application functions (.NET, etc.)
  • Devices (UPnP)
  • Web pages
Services such as "Jeff's Video", "Video from DV",
"Play (Video)", "Play (Audio)", "View", "Contact
from Outlook", "Add into Outlook", "Dial", "Open",
"Save", "Print", "Weather of ...", and "Aerial
Photo of ..." compose into tasks such as "Play
Jeff's Video", "Dial Contact from Outlook", and
"View Weather of Maryland".
48
Task Computing Environment (TCE)
  • Windows software to realize TC
  • Core is written in Java
  • Requirements
  • Windows XP with IIS (Internet Information
    Services) installed
  • Java Runtime Environment (for TC clients only)
  • Single Windows installer with
  • TC clients in many modalities (graphical, voice,
    Web-based)
  • More than 50 kinds of TC services
  • OS, application functions, devices
  • Many mechanisms for dynamic service creation
  • Web Services APIs for TC functions to program
    your own application
  • Available from http://taskcomputing.org/ for
    research institutes

49
TC Architecture
User
  • Presentation Layer: Task Computing clients,
    applications, and a Web-based client, on top of
    a Web Service API
  • Middleware Layer: Discovery Engine,
    Execution/Monitoring Engine, Service Composition
    Engine, and management tools
  • Semantic Service Descriptions (one per service)
  • Service Layer: services and e-services
  • Realization Layer: devices, applications, and
    content
50
TC Process
  • Discover services
  • Create a task
  • Execute it (directly, by Web, or by email)
  • Save, share, and edit tasks
51
For Your Applications
  • Clients
  • Service Discovery
  • Task Creation/Edit/Save
  • Task Execution
  • Services
  • Create Web Services
  • .NET for Windows, Axis for Java
  • Provide Semantic Service Description (SSD)
  • Reuse ontology (schema)
  • Or create your own ontology
  • Create SSD based on the ontology
  • Publish SSD

Tools for each of these steps are provided by TCE
or otherwise available
52
Problems In Scientific Domain
  • Complex workflow generation involving many
    interconnected software tools
  • Requires expert knowledge of each tool
  • Too many variations of tools
  • Too many tools!
  • Requires a sophisticated level of computing
  • Format conversions
  • Different platforms
  • Difficult for
  • young scientists to start out
  • existing scientists to explore new
    tools/workflows
  • Need a new environment where scientists can
  • easily experiment with combining several tools to
    accomplish their research goals
  • without requiring sophisticated computing support
  • Abstract away the IT details and put the tools
    in the hands of domain scientists

53
Task from Distributed Resources
  • Tasks can be composed quickly by end-users from
    distributed and heterogeneous resources, then
    easily shared and later edited

[Diagram: tasks composed from distributed services]
54
Bio-STEER
  • Application of Task Computing and Semantic Web
    technologies in the bioinformatics domain
  • Collaborative work with UMD
  • Professor Mike Cummings, Center for
    Bioinformatics and Computational Biology
  • The Lattice Project
  • Offers a growing list of bioinformatics services
    on the Grid
  • http://lattice.umiacs.umd.edu/

55
Bio-STEER Benefits
  • User-friendly environments
  • Data moves into the semantic layer
  • Easy composition; interact only when necessary,
    otherwise execution is automated
  • Frees scientists from needing computing support
    to build workflows
  • Enables convenient sharing of workflows (sharing
    not just data, but process)
  • Workflows are easy to reuse, share, and modify;
    progress is easy to track
  • Promotes collaboration among scientists
  • High reusability and changeability
  • Encourages scientists to experiment
  • Scientists can concentrate on their research
56
Demo
57
What is needed?
  • Development of semantic services and web services
  • However,
  • One-time cost: applications are stable and
    limited in number
  • Semantic services are easily shared and modified
  • The bio IT support team can concentrate on
    building better infrastructure, including
    semantic services
  • Near-term enhancements expected
  • Better support from TCE: concurrent branch
    execution and better error handling/recovery
  • User feedback: support for additional features
    and services based on feedback

58
More Information
  • Lattice Website
  • http://lattice.umiacs.umd.edu/
  • Task Computing
  • http://taskcomputing.org/