Accelerating the Scientific Exploration Process with Scientific Workflows - PowerPoint PPT Presentation

About This Presentation
Slides: 94
Provided by: geon
Learn more at: http://www.geongrid.org
Transcript and Presenter's Notes

1
Accelerating the Scientific Exploration Process
with Scientific Workflows
  • Ilkay ALTINTAS
  • Director, Scientific Workflow Automation
    Technologies Laboratory
  • San Diego Supercomputer Center, UCSD
  • altintas@sdsc.edu

2
"Why does this magnificent applied science,
which saves work and makes life easier, bring us
so little happiness? The simple answer runs:
Because we have not yet learned to make sensible
use of it." Albert Einstein, in an
address at Cal Tech, 1931. (Harper)
3
A process cannot be understood by stopping it.
Understanding must move with the flow of the
process, must join it and flow with it.
(First Law of Mentat), Frank Herbert, Dune.
  • Observing / Data: Microscopes, telescopes,
    particle accelerators, X-rays, MRIs,
    microarrays, satellite-based
    sensors, sensor networks, field studies
  • Analysis, Prediction / Models and model
    execution: Potentially large
    computation and visualization

Today's scientific method


  • Observe -> Hypothesize -> Conduct experiment ->
    Analyze data -> Compare results and Conclude ->
    Predict -> ...


One can add more to this picture: network,
Grid, portals, ...
4
Scientific workflows emerged as an answer to the
need to combine multiple Cyberinfrastructure
components in automated process networks.
So, what is a scientific workflow?
5
The Big Picture: Supporting the Scientist
From Napkin Drawings
to Executable
Workflows
Source: Mladen Vouk (NCSU)
Conceptual SWF
Executable SWF
Here: John Blondin, NC State; Astrophysics Terascale
Supernova Initiative; SciDAC, DOE
6
SWF Systems Requirements
  • Design tools-- especially for non-expert users
  • Ease of use-- fairly simple user interface having
    more complex features hidden in the background
  • Reusable generic features
  • Generic enough to serve different communities
    but specific enough to serve one domain (e.g.
    geosciences) => customizable
  • Extensibility for the expert user
  • Registration, publication, and provenance of data
    products and process products (workflows)
  • Dynamic plug-in of data and processes from
    registries/repositories
  • Distributed WF execution (e.g. Web and Grid
    awareness)
  • Semantics awareness
  • WF Deployment
  • as a web site, as a web service, Power apps (a
    la SciRUN II)
  • Interoperability with other SWF systems

7
Phylogeny Analysis Workflows
8
Promoter Identification Workflow
Source: Matt Coleman (LLNL)
9
PointInPolygon algorithm
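The PointInPolygon step named above is the classic point-in-polygon test; a minimal ray-casting sketch (a generic illustration, not the actual GEON actor code):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count edge crossings of a horizontal ray
    from (x, y). polygon is a list of (x, y) vertex tuples; an odd
    crossing count means the point lies inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A unit square: a point inside toggles the flag an odd number of times.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(2.0, 0.5, square))  # False
```

In the mineral classifier, the polygons are the fields of a classification diagram and each sample's composition is the test point.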
10
TSI Workflow-2 (D. Swesty)
11
TSI-2 Workflow Overview
Source: Terence Critchlow, Xiaowen Xin (LLNL)
(Flowchart: submit batch request at NERSC; a Delay node paces
repeated "check job status" polls; while Queued, keep polling; once
Running or Done, identify new complete files, transfer files to HPSS,
verify the transfer completed correctly, transfer files to SB, verify
again, and delete the source file; finally generate the movie and
thumbnails.)
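The submit/poll/transfer cycle in this overview can be sketched as a generic monitoring loop (all callables are hypothetical stand-ins, not the LLNL implementation):

```python
import time

def run_batch_job(submit, get_status, list_new_files, transfer, verify,
                  poll_interval=0.0):
    """Generic submit/poll/transfer loop mirroring the TSI-2 flowchart.

    submit() returns a job id, get_status(job) returns "Queued",
    "Running", or "Done", and list_new_files(job) yields files
    completed since the previous poll."""
    job = submit()
    transferred = []
    while True:
        status = get_status(job)
        if status in ("Running", "Done"):
            # Move newly completed files as soon as they appear.
            for f in list_new_files(job):
                transfer(f)
                if not verify(f):   # "transfer completed correctly?"
                    transfer(f)     # naive single retry on a bad transfer
                transferred.append(f)
        if status == "Done":
            return transferred
        time.sleep(poll_interval)   # the Delay node paces the polling

# Toy driver: the job finishes on the second poll with two output files.
states = iter(["Running", "Done"])
files = iter([["step0.dat"], ["step1.dat"]])
out = run_batch_job(lambda: "job-1", lambda j: next(states),
                    lambda j: next(files), lambda f: None, lambda f: True)
print(out)  # ['step0.dat', 'step1.dat']
```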
12
TSI-2 Executable Workflow Screenshot
Source: Terence Critchlow, Xiaowen Xin (LLNL)
13
TSI-2 Web Interface for Monitoring
Source: Terence Critchlow, Xiaowen Xin (LLNL)
14
TSI-2 Workflow Running Interface
Source: Terence Critchlow, Xiaowen Xin (LLNL)
15
CPES Fusion Simulation Workflow
  • Fusion Simulation Codes (a) GTC (b) XGC with
    M3D
  • e.g. (a) currently 4,800 (soon 9,600) nodes Cray
    XT3; 9.6 TB RAM; 1.5 TB simulation data/run
  • GOAL
  • automate remote simulation job submission
  • continuous file movement to secondary analysis
    cluster for dynamic visualization and simulation
    control
  • with runtime-configurable observables

Submit FileMover Job
Submit Simulation Job
Execution Log (=> Data Provenance)
Select JobMgr
Overall architect (and prototypical user): Scott
Klasky (ORNL); WF design and implementation: Norbert
Podhorszki (UC Davis)
16
CPES Analysis Workflow
This workflow and its associated features were
highlighted in the SC06 Kepler tutorial, which
received the highest ratings from attendees.
  • Concurrent analysis pipeline (@Analysis Cluster)
  • convert, analyze, copy-to-Web-portal
  • easy configuration, re-purposing

Reusable Actor Class
Specialized Actor Instances
Pipelined Execution Model
Inline Documentation
Inline Display
Easy-to-edit Parameter Settings
Overall architect (and prototypical user): Scott
Klasky (ORNL); WF design and implementation: Norbert
Podhorszki (UC Davis)
17
Scientific Workflow Systems
  • Combination of
  • data integration, analysis, and visualization
    steps
  • automated "scientific process"
  • Mission of scientific workflow systems
  • Promote scientific discovery by providing tools
    and methods to generate scientific workflows
  • Create an extensible and customizable graphical
    user interface for scientists from different
    scientific domains
  • Support computational experiment creation,
    execution, sharing, reuse and provenance
  • Design frameworks which define efficient ways to
    connect to the existing data and integrate
    heterogeneous data from multiple resources
  • Make technology useful through users monitor!!!

18
Kepler is a Scientific Workflow System
www.kepler-project.org
  • and a cross-project collaboration
  • 3rd Beta release (Jan 8, 2007)
  • Builds upon the open-source Ptolemy II framework

19
Kepler is a Team Effort
Griddles
Nimrod
Resurgence
SRB
Cipres
NLADR
Contributor names and funding info are at the
Kepler website!!
LOOKING
20
Usage Statistics
  • Source code access
  • 154 people accessed source code
  • 30 members have write permission
  • Projects using Kepler
  • SEEK (ecology)
  • SciDAC (molecular bio, astrophysics, ...)
  • CPES (plasma simulation, combustion)
  • GEON (geosciences)
  • CiPRes (phylogenetics)
  • ROADnet (real-time data)
  • LOOKING (oceanography)
  • CAMERA (metagenomics)
  • Resurgence (computational chemistry)
  • NORIA (ocean observing CI)
  • NEON (ecology observing CI)
  • ChIP-chip (genomics)
  • COMET (environmental science)
  • Cheshire Digital Library (archival)
  • Digital preservation (DIGARCH)
  • Cell Biology (Scripps)
  • DART (X-Ray crystallography)
  • Ocean Life

Kepler downloads: Total 9,204; Beta 6,675
(red: Windows, blue: Macintosh)
Source: Matt Jones, NCEAS
21
Kepler Software Development Practice
  • How does this all work?
  • Joint CVS -- special rules!
  • Projects like SDM, Cipres, Resurgence have their
    specialized releases out of a common
    infrastructure!
  • Open-source (BSD)
  • Website / Wiki -- http://kepler-project.org
  • Communications
  • Busy IRC channel
  • Mailing lists Kepler-dev, Kepler-users,
    Kepler-members
  • Telecons for design discussions
  • 6-monthly hackathons
  • Focus group meetings workshops and conference
    calls
  • How will it all persist?

22
Actors are the Processing Components
  • Actor
  • Encapsulation of parameterized actions
  • Interface defined by ports and parameters
  • Port
  • Communication between input and output data
  • Without call-return semantics
  • Model of computation
  • Communication semantics among ports
  • Flow of control
  • Implementation is a framework
  • Examples
  • Simulink (The MathWorks)
  • LabVIEW (from National Instruments)
  • Easy 5x (from Boeing)
  • ROOM (Real-time object-oriented modeling)
  • ADL (Wright)

Actor-Oriented Design
Source: Edward A. Lee, UC Berkeley
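A minimal sketch of the actor/port idea in plain Python (hypothetical classes, not Ptolemy's actual API): actors encapsulate parameterized actions, and tokens move between buffered ports without call-return semantics.

```python
from collections import deque

class Port:
    """A buffered communication endpoint; no call-return semantics."""
    def __init__(self):
        self.queue = deque()
    def send(self, token):
        self.queue.append(token)
    def receive(self):
        return self.queue.popleft()

class Ramp:
    """Source actor: each firing emits the next integer on its output."""
    def __init__(self):
        self.output = Port()
        self._value = 0
    def fire(self):
        self.output.send(self._value)
        self._value += 1

class Scale:
    """Transformer actor: multiplies each input token by a parameter."""
    def __init__(self, factor):
        self.input = Port()
        self.output = Port()
        self.factor = factor  # parameters configure the encapsulated action
    def fire(self):
        self.output.send(self.input.receive() * self.factor)

# Wire ramp -> scale by sharing a port, then fire each actor three times.
ramp, scale = Ramp(), Scale(10)
scale.input = ramp.output
for _ in range(3):
    ramp.fire()
    scale.fire()
print(list(scale.output.queue))  # [0, 10, 20]
```

Note that neither actor calls the other; deciding when each one fires is left to an external scheduler, which is exactly the director's job in the next slide.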
23
Some actors in place for
  • Generic Web Service Client and Web Service
    Harvester
  • Customizable RDBMS query and update
  • Command Line wrapper tools (local, ssh, scp,
    ftp, etc.)
  • Some Grid actors: Globus Job Runner,
    GridFTP-based file access, Proxy Certificate
    Generator
  • SRB support
  • Native R and Matlab support
  • Interaction with Nimrod and APST
  • Communication with ORBs through actors and
    services
  • Imaging, Gridding, Vis Support
  • Textual and Graphical Output
  • more generic and domain-oriented actors

24
Directors are the WF Engines that
  • Implement different computational models
  • Define the semantics of
  • execution of actors and workflows
  • interactions between actors
  • Ptolemy and Kepler are unique in combining
    different execution models in heterogeneous
    models!
  • Kepler is extending Ptolemy directors with
    specialized ones for web service based workflows
    and distributed workflows.
  • Process Networks
  • Rendezvous
  • Publish and Subscribe
  • Continuous Time
  • Finite State Machines
  • Dataflow
  • Time Triggered
  • Synchronous/reactive model
  • Discrete Event
  • Wireless
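The director idea can be illustrated by running the same actor graph under two toy execution policies, one SDF-like (static schedule) and one process-network-like (each actor blocks on a FIFO in its own thread). This is a sketch of the concept, not Ptolemy's implementation:

```python
import queue
import threading

def sdf_director(actors, inputs):
    """SDF-like director: a fixed firing order computed once;
    here each actor fires exactly once per input token."""
    results = []
    for token in inputs:
        for fire in actors:            # static, precomputed schedule
            token = fire(token)
        results.append(token)
    return results

def pn_director(actors, inputs):
    """Process-network-like director: each actor runs in its own
    thread and blocks on a FIFO channel from its upstream neighbor."""
    channels = [queue.Queue() for _ in range(len(actors) + 1)]

    def worker(fire, inbox, outbox):
        while True:
            token = inbox.get()        # blocking read: dataflow semantics
            if token is None:          # end-of-stream marker
                outbox.put(None)
                return
            outbox.put(fire(token))

    threads = [threading.Thread(target=worker,
                                args=(f, channels[i], channels[i + 1]))
               for i, f in enumerate(actors)]
    for t in threads:
        t.start()
    for token in inputs:
        channels[0].put(token)
    channels[0].put(None)
    results = []
    while (token := channels[-1].get()) is not None:
        results.append(token)
    for t in threads:
        t.join()
    return results

# The same two-actor graph runs unchanged under either engine.
graph = [lambda x: x + 1, lambda x: x * 2]
print(sdf_director(graph, [1, 2, 3]))  # [4, 6, 8]
print(pn_director(graph, [1, 2, 3]))   # [4, 6, 8]
```

The workflow description stays the same; swapping the director swaps the model of computation.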

25
Vergil is the GUI for Kepler
Data Search
Actor Search
  • Actor ontology and semantic search for actors
  • Search -> Drag and drop -> Link via ports
  • Metadata-based search for datasets

26
Actor Search
  • Kepler Actor Ontology
  • Used in searching actors and creating conceptual
    views (folders)
  • Currently more than 200 Kepler actors added!

27
Data Search and Usage of Results
  • EarthGrid
  • Discovery of data resources through local and
    remote services
  • SRB,
  • Grid and Web Services,
  • Db connections
  • Registering datasets on the fly using workflows

28
Current Advances and Users
  • Data and Actor search
  • EarthGrid data access system
  • Kepler Component Library
  • Kepler Archive (KAR) format
  • Integrated support for LSID identifiers for all
    objects
  • Object Manager and cache
  • Web service execution
  • RExpression, MatlabExpression
  • Redesigned user interface
  • Authentication subsystem
  • Null-value handling
  • Documentation
  • Semantics support
  • annotation, search, workflow validation,
    integration
  • Collection-oriented workflows
  • Domain-specific actors for case studies
  • Provenance framework
  • Grid computing support
  • NIMROD, Globus, ssh, ...
  • Kepler Users
  • User interface users
  • Workflow developers
  • Scientists
  • Software Developers
  • Engineers
  • Researchers
  • Batch users
  • Portals
  • Other workflow systems as an engine

29
Kepler System Architecture
Authentication
GUI
Kepler GUI Extensions
Vergil
Documentation
Provenance Framework
Kepler Object Manager
Smart Re-run / Failure Recovery
SMS
Type System Ext
Actor/Data SEARCH
Kepler Core Extensions
Ptolemy
30
Kepler can be used as a batch execution engine
  • Configuration phase
  • Subset: DB2 query on DataStar

Portal
Monitoring/ Translation
Subset
  • Interpolate: Grass RST, Grass IDW, GMT
  • Visualize: Global Mapper, FlederMaus, ArcIMS

Scheduling/ Output Processing
Grid
31
Advantages of Scientific Workflow Systems
  • Formalization of the scientific process
  • Easy to share, adapt and reuse
  • Deployable, customizable, extensible
  • Management of complexity and usability
  • Support for hierarchical composition
  • Interfaces to different technologies from a
    unified interface
  • Can be annotated with domain-knowledge
  • Tracking provenance of the data and processes
  • Keep the association of results to processes
  • Make it easier to validate/regenerate results and
    processes
  • Enable comparison between different workflow
    versions
  • Execution monitoring and fault tolerance
  • Interaction with multiple tools and resources at
    once

32
Evolving Challenges For Scientific Workflows
  • Access to heterogeneous data and computational
    resources and link to different domain knowledge
  • Interface to multiple analysis tools and workflow
    systems
  • One size doesn't fit all!
  • Support computational experiment creation,
    execution, sharing, reuse and provenance
  • Manage complexity, user and process interactivity
  • Extensions for adaptive and dynamic workflows
  • Track provenance of workflow design (and
    evolution), execution, and intermediate and final
    results
  • Efficient failure recovery and smart re-runs
  • Support various file and process transport
    mechanisms
  • Main memory, Java shared file system,

33
Evolving Challenges For Scientific Workflows
  • Support the full scientific process
  • Use and control instruments, networks and
    observatories in observing steps
  • Scientifically and statistically analyze and
    control the data collected by the observing
    steps,
  • Set up simulations as testbeds for possible
    observatories
  • Come up with efficient and intuitive workflow
    deployment methods
  • Do all these in a secure and easy-to-use way!!!

34
New Project REAP
  • Management and Analysis of Observatory Data using
    Kepler Scientific Workflows
  • The vision
  • An integrated environment for analyzing data from
    observatories
  • Funded 2006-2009
  • NSF CEOP
  • Jones, Altintas, Baru, Ludaescher, Schildhauer
  • Partners
  • UCSB, SDSC/UCSD, UCDavis, UCLA, OpenDAP, OSU
  • Lead institution NCEAS/UCSB

35
An End-to-End CI for Observatories
  • Scientist's view
  • Access remote real-time and archived data streams
  • as if they were locally generated!
  • Design and execute scientific workflows
  • Processing steps scientific models
  • Data raw or derived from sensor networks and
    data archives
  • Combine data streams in hybrid analytical models
  • System Engineer's view
  • View and monitor observatory infrastructure
    components
  • Model the impacts of system changes before they
    are executed
  • Modify the configuration of the observatory
    sensors and network
  • Overall goal: To bring together, for the first
    time, seamless access to sensor data from real-
    time data grids with analytical tools and
    sophisticated modeling capabilities of scientific
    workflow environments

36
New Project Kepler C.O.R.E.
  • Development of Kepler CORE -- A Comprehensive,
    Open, Robust, and Extensible Scientific Workflow
    Infrastructure
  • Ludäscher, Altintas, Bowers, Jones, McPhillips
  • Extensibility, Governance, Sustainability
  • Goals
  • Reliable
  • refactored build
  • more modular design
  • improved engineering practices
  • Independently extensible
  • Open architecture, open project With improved
    governance!

37
Some success stories
Our collaborative efforts with the SDM Center
over the last few years, has enabled us to
publish several peer reviewed papers, and
abstracts that were an important factor in
maintaining current funding in the low-dose
radiation research program
(http://lowdose.tricity.wsu.edu/). Having access to
SDM Center
scientists and their automated workflow and
parallel processing tools over the next two years
will be critical for the identification and
characterization of regulatory element profiles
of IR-responsive genes and will provide valuable
understanding of the genetic mechanisms of
IR-response and should provide powerful
biological indicators of genetic susceptibilities
for tissue and genetic damage. -
Matthew A. Coleman, Bioscience Program, Lawrence
Livermore National Laboratory, 2005
"We are finally seeing some nice payoffs in terms
of easy-to-use computational chemistry software
with some unique capabilities. The framework
illustrates nicely the interplay between
technology and applications - including the
compute software, the middleware, and the grid
computing capabilities." - Kim K. Baldridge,
PI, Resurgence project, 2004
During SciDAC I, members of the Terascale
Supernova Initiative (TSI), now being expanded to
PSI, collaborated extensively with members of
your team. And the benefits were palpable. The
successful deployment of a scientific workflow
management and automation tool, which arose out
of a fruitful collaboration between Doug Swesty
and Eric Myra of Stony Brook and Terence
Critchlow of LLNL, is one example. Moreover,
others of your effort (e.g., Mladen Vouk) are
based at PSI partner sites and engaged in helping
some of our application scientists (in this case,
John Blondin), which will further enhance our
overall research exchange.
- Anthony Mezzacappa,
PI, DOE SciDAC Petascale Supernova Initiative,
2005
The CIPRES project has as a key goal the creation
of software infrastructure that allows developers
in the community to easily contribute new
software tools, ... The modular nature of Kepler
met our requirements, as it is a JAVA platform
that allows users to construct linear, looping,
and complex workflows from just the kinds of
components the CIPRES community is developing.
By adopting this tool, we were able to focus on
developing appropriate framework and registry
tools for our community, and use the friendly
Kepler user application interface as an entrée to
our services. We are very excited about the
progress we have made, and think the tool will be
revolutionary for our user base. -
Mark A. Miller, PI, NSF CIPRES project, 2006
  • It can help you too!

38
Using Scientific Workflows in GEON
39
Utilizing Kepler in GEON
  • An extensible, easy to use, workflow design and
    prototyping tool
  • Integrating heterogeneous local and remote tools
    in a single interface
  • Web and Grid services
  • GIS services
  • Legacy application integration via Shell-Command
    actor
  • Remote tools via SSH, SCP and GridFTP
  • Relational and spatial databases access
  • Reusable generic and domain specific actors
  • Support for High Performance Computations
  • Job submission and monitoring
  • Logging of execution trace and registering
    intermediate products
  • Data provenance and failure recovery
  • Portal accessibility.
  • Deployment of workflows to the GEON portal
  • Harvesting data and tools from repositories

40
Integrating heterogeneous local and remote tools
in a single interface
  • Generic Web Service Client and Web Service
    Harvester
  • GIS Services
  • Legacy Application Integration via Command Line
    wrapper tools, e.g. GMT
  • RDBMS and Spatial Databases Access
  • Remote Tools Access via SSH, SCP and GridFTP
  • Some Grid actors: Globus Job Runner, GridFTP-based
    file access, Proxy Certificate Generator
  • Generic and domain-oriented actors
  • Classification and interpolation algorithms
  • Native R support
  • Imaging, Gridding, Vis Support
  • Textual and Graphical Output
  • more

41
Some Features
  • Support for High Performance Computations
  • Job submission and monitoring
  • Logging of execution trace and registering
    intermediate products
  • Data provenance and failure recovery
  • Portal accessibility
  • Deployment of workflows to the GEON portal
  • Harvesting data and tools from repositories
  • Direct access to data and tools registered to the
    GEON portal
  • A web service harvester
  • Storage Resource Broker (SRB)

42
GEON Workflows Examples
43
GEON Mineral Classification Workflow
An early example: Classification for naming
Igneous Rocks.
44
GEON Mineral Classifier Workflow
45
PointInPolygon algorithm
46
Enter initial inputs, Run and Display results
47
Output Visualizers
Browser display of results
48
Integration Scenario: A-type query
  • Classifying A-types from an Igneous rock database
  • Integrating between Relational and Spatial
    (shapefiles) databases to query and interactively
    display GIS results
  • Reusing existing and generic Kepler components
    (Classifier, JDBC)

Ghulam Memon, Ashraf Memon
49
Classification sub-workflow runs for
each body, each sample and each diagram
Reusing The Mineral Classifier
50
Output
51
Extraction of Datasets on the Fly
Translating query xml response to web service xml
input format.
worldImage
XML SOAP response
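The translation step can be sketched with Python's standard XML tooling; the element names below are hypothetical, since the real GEON response and service schemas differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical query response; the real GEON response schema differs.
response = """<queryResponse>
  <row><lat>33.1</lat><lon>-116.4</lon><value>2.7</value></row>
  <row><lat>33.2</lat><lon>-116.5</lon><value>3.1</value></row>
</queryResponse>"""

def to_service_input(response_xml):
    """Re-shape a query response into the (hypothetical) input format
    expected by a downstream web service actor."""
    rows = ET.fromstring(response_xml).findall("row")
    points = ET.Element("points")
    for row in rows:
        p = ET.SubElement(points, "point")
        p.set("lat", row.findtext("lat"))
        p.set("lon", row.findtext("lon"))
        p.text = row.findtext("value")
    return ET.tostring(points, encoding="unicode")

service_input = to_service_input(response)
print(service_input)
```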
52
Extraction of Datasets on the Fly
53
Image of the resulting dataset
Sample
54
GEON Dataset Registration
(as in geonSearch)
55
GEON Dataset Registration
Registering
56
Putting it all together
57
Beach Balls Workflow
  • GOAL: Integrate seismic focal mechanisms with
    image services

58
ArcIMS-based Web Services
(Diagram: the workflow sends XML, ZIP, ASCII, or SRB IDs over the
network to an ArcIMS SOAP server. Converter steps (ASCII To Shape,
XML To Shape, ASCII To Map, XML To Map, Shape To ImageService) turn
the inputs into shape files and create a service/image; SRB provides
storage, a Logger records activity, and an Exception Handler turns
failures into user-friendly messages. Successful output: the created
service/image.)
59
Beach Balls Workflow Output
60
Gravity Modeling Workflow
(Diagram: inputs Observed Gravity, Topography, Pluton map, Sediments,
Moho, and Densities feed a Difference calculator whose output is a
Residual Map. An interactive 3D model defines the possible depth
distribution of plutons.)
Source (GEON): Dogan Seber, Randy Keller
61
Kepler as a Modeling Tool Gravity Modeling
Workflow
  • Comparing synthetic and observed gravity models
    from heterogeneous data sources; creating a
    residual map of the difference using ESRI
    services and displaying it in a web browser
  • Portrays Kepler as a prototyping tool (ToDo)
  • Adjustable parameter-wise

Joint work between SDSC and UTEP.
62
Gravity Modeling Workflow
63
LiDAR Introduction
(Diagram: Survey -> Point Cloud (x, y, zn, ...) -> Process / Classify
-> Interpolate / Grid -> Analyze / Do Science.)
Sources: R. Haugerud, U.S.G.S.; D. Harding, NASA
64
The Computational Challenge
  • LiDAR generates massive data volumes - billions
    of returns are common.
  • Distribution of these volumes of point cloud data
    to users via the internet represents a
    significant challenge.
  • Processing and analysis of these data requires
    significant computing resources not available to
    most geoscientists.
  • Interpolation of these data challenges typical
    GIS / interpolation software.
  • our tests indicate that ArcGIS, Matlab and
    similar software packages struggle to interpolate
    even a small portion of these data.
  • Traditionally: Popularity > Resources

65
LiDAR Difficulties
  • Massive volumes of data
  • 1000s of ASCII files
  • Hard to subset
  • Hard to distribute and interpolate
  • Analysis requires high performance computing
  • Traditionally: Popularity > Resources

66
A Three-Tier Architecture
Portal
  • GOAL: Efficient LiDAR interpolation and analysis
    using GEON infrastructure and tools
  • GEON Portal
  • Kepler Scientific Workflow System
  • GEON Grid
  • Use scientific workflows to glue/combine
    different tools and the infrastructure

Grid
67
Kepler can be used as a batch execution engine
Portal
  • Configuration phase
  • Subset: DB2 query on DataStar

Monitoring/ Translation
Subset
  • Interpolate: Grass RST, Grass IDW, GMT
  • Visualize: Global Mapper, FlederMaus, ArcIMS

Scheduling/ Output Processing
Grid
68
Lidar Processing Workflow (using Fledermaus)
(Diagram: Subset produces data d1, interpolation produces grid file
d2 on an NFS-mounted disk, and Fledermaus renders the result.)
69
Lidar Processing Workflow (using Global Mapper)
(Same pipeline, with Global Mapper as the visualizer.)
70
Lidar Processing Workflow (using ArcIMS)
(Same pipeline, with ArcIMS as the visualizer.)
71
Lidar Workflow Portlet
  • User selections from GUI
  • Translated into a query and a parameter file
  • Uploaded to remote machine
  • Workflow description created on the fly
  • Workflow response redirected back to portlet

72
LIDAR POST-PROCESSING WORKFLOW PORTLET
73
Portlet User Interface - Main Page
74
75
Portlet User Interface - Parameter Entry 1
76
Portlet User Interface - Parameter Entry 2
77
Portlet User Interface - Parameter Entry 3
78
Behind the Scenes Workflow Template
79
Filled Template
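The template-filling step behind the portlet can be sketched with ordinary parameter substitution (the placeholder names are hypothetical; real GEON templates are full workflow descriptions):

```python
from string import Template

# A hypothetical fragment of a workflow template; the real templates
# are complete XML workflow descriptions.
template = Template(
    '<property name="boundingBox" value="$xmin,$ymin,$xmax,$ymax"/>\n'
    '<property name="algorithm" value="$algorithm"/>'
)

# Values the portlet would collect from the user's GUI selections.
params = {"xmin": "-117.2", "ymin": "32.5", "xmax": "-116.0",
          "ymax": "33.5", "algorithm": "GRASS_RST"}

filled = template.substitute(params)
print(filled)
```

The filled description is then uploaded to the remote machine and handed to the workflow engine, as in the portlet steps above.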
80
Example Outputs
81
With Additional Algorithms
82
GLW Monitoring
  • Job management
  • A unified interface to follow up on the status of
    submitted jobs. The system lets users:
  • View job metadata
  • Zoom to a specific bounding box location
  • Track errors
  • Modify a job and re-submit
  • View the processing results
  • In the future, register desired workflow products
  • Useful for publication
  • GLW is exposed to a high risk of component
    failures
  • Long running process
  • Distributed computational resources under diverse
    controlling authorities
  • Provides transparent/background error handling
    using provenance data and smart reruns
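The smart-rerun idea in the last bullet can be sketched as: record each completed step in a provenance log, and on re-submission skip steps whose products already exist (a generic sketch, not the GLW code):

```python
def run_with_provenance(steps, log):
    """Run named steps in order, skipping any step already recorded
    in `log` (a set of completed step names) by an earlier, failed run."""
    for name, action in steps:
        if name in log:
            continue          # smart rerun: product already exists
        action()
        log.add(name)         # record provenance only after success

executed = []
steps = [("subset", lambda: executed.append("subset")),
         ("interpolate", lambda: executed.append("interpolate")),
         ("visualize", lambda: executed.append("visualize"))]

log = {"subset"}              # the first run completed only the subset step
run_with_provenance(steps, log)
print(executed)               # ['interpolate', 'visualize']
```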

83
To Sum Up
  • Kepler is an open-source system and
    collaboration
  • was initiated in August 2003
  • grows by application pull from contributors
  • released Beta3.0 on Jan 8, 2007
  • There is a lot more to cover and work on
  • More information: http://kepler-project.org
  • Next session
  • INSTALLING AND RUNNING KEPLER
  • HANDS-ON EXERCISES

84
BEFORE THE BREAK
  • Go to http://kepler-project.org
  • Start the download for Kepler Beta3

85
GEOSCIENCES WORKFLOW DEMOS
86
Examples
  • Searching for actors and datasets
  • Actor search for gis
  • Data search for volcanic
  • Create a Hello World! workflow
  • <KEPLER_DIR>/demos/getting-started/04-HelloWorld.xml
  • Opening and creating workflows using R as a
    statistical tool
  • Relational Database Access and Query
  • Connect to VT Igneous rocks database
  • Database format: DB2
  • URL: jdbc:db2://data.sdsc.geongrid.org:60000/IGNEOUS
  • User: readonly
  • Passwd: read0n1y
  • Web service based workflows
  • <KEPLER_DIR>/demos/getting-started/06-WebServicesAndDataTransformation.xml
  • Composite actors
  • Invoke a remote application SSH
  • ls to a remote directory
  • Using various interpolation algorithms
  • interpolation actor

87
Hands-on Exercises
88
Opening and Running a Workflow
  • Start Kepler
  • Open the HelloWorld.xml under the
    demos/getting-started directory in your local
    Kepler folder
  • Two options to run a workflow
  • PLAY BUTTON in the toolbar
  • RUNTIME WINDOW from the run menu

89
Modifying an Existing Workflow and Saving It
  • GOAL Modify the HelloWorld workflow to display a
    parameter-based message
  • Step by step instructions
  • Open the HelloWorld workflow as before
  • From actors search tab, search for Parameter
  • Drag and drop the parameter to the workflow
    canvas on the right
  • Double click the parameter and type your name
  • Right click the parameter and select Customize
    Name, type in name.
  • Double click the Constant actor and type the
    following
  • Hello name
  • Save
  • Run the workflow

90
Creating a HelloWorld! Workflow (p. 24)
  • Open a new blank workflow canvas
  • From toolbar: File -> New Workflow -> Blank
  • In the Components tab, search for Constant and
    select the Constant actor.
  • Drag the Constant actor onto the Workflow canvas
  • Configure the Constant actor
  • right-click the actor and selecting Configure
    Actor from the menu
  • Or, double click the actor
  • Type Hello World in the value field and click
    Commit
  • In the Components and Data Access area, search
    for Display and select the Display actor found
    under Textual Output.
  • Drag the Display actor to the Workflow canvas.
  • Connect the output port of the Constant actor to
    the input port of the Display actor.
  • In the Components and Data Access area, select
    the Components tab, then navigate to the
    /Components/Director/ directory.
  • Drag the SDF Director to the top of the Workflow
    canvas.
  • Run the model

91
Using Relational Databases in a Workflow
  • GOAL: Accessing a geoscience database using a
    generic database actor
  • Step by step instructions
  • In the Components and Data Access area, select
    the components tab
  • Search for database
  • Drag Open Database Connection and Database
    Query onto the canvas
  • Configure Open Database Connection with the
    following parameters
  • Database format: PostgreSQL
  • Database URL: jdbc:postgresql://geon17.sdsc.edu:5432/igneous
  • Username: readonly
  • Password: read0n1y
  • Connect the output of Open Database Connection
    with the dbcon input port of Database Query
  • Double-click to customize the actor
  • Query: SELECT * FROM IGROCKS.ModalData WHERE
    SSID = 227
  • (227 is the ssID value)
  • Add Display actor (from components tab), connect
    ports, add sdf director (as in previous example)
  • Run the workflow
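The two actors encapsulate the ordinary open-connection/query pattern. Here is a plain-code sketch of the same flow using an in-memory SQLite stand-in (the exercise itself targets a remote PostgreSQL database, and the column names below are invented):

```python
import sqlite3

# Stand-in for "Open Database Connection": the exercise's actor opens
# a remote PostgreSQL database instead of this in-memory SQLite one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ModalData (SSID INTEGER, mineral TEXT, pct REAL)")
conn.executemany("INSERT INTO ModalData VALUES (?, ?, ?)",
                 [(227, "quartz", 31.0), (227, "feldspar", 55.0),
                  (300, "olivine", 12.0)])

# Stand-in for "Database Query": the connection token flows into the
# query actor, which emits the result rows downstream.
rows = conn.execute(
    "SELECT mineral, pct FROM ModalData WHERE SSID = ?", (227,)).fetchall()
print(rows)  # [('quartz', 31.0), ('feldspar', 55.0)]
```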

92
Creating Web Service Workflows
  • GOAL: Executing a Web Service using the generic
    Web Service client
  • Step by step instructions
  • In the Components and Data Access area, select
    the components tab
  • Search for web service
  • Drag Web Service Actor onto the canvas
  • Double click the actor, enter
    http://titan.geongrid.org:8080/axis/services/GridAsciiToImageService?wsdl,
    commit
  • Double click the actor again, select
    getImageForGridAsciiString as method name,
    commit
  • Search for String Constant in the components
    tab. Drag and drop String Constant onto
    workflow canvas
  • Double click the String Constant, type a jpg
    and commit
  • Connect String Constant output with the Web
    Service Actor input
  • Search for FileReader actor and customize it to
    use KEPLER/lib/testdata/geon/gravityGrid.asc
  • Add a Display and connect its input with the
    Web Service Actor output
  • Add the SDF director
  • Run the workflow
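Under the hood, the generic Web Service actor assembles an ordinary SOAP request from the WSDL. A minimal sketch of such a message (the namespace and argument names are illustrative; the real ones come from the service's WSDL):

```python
def soap_envelope(method, arguments, namespace="urn:example"):
    """Build a minimal SOAP 1.1 request body for `method`.

    The element and namespace names here are illustrative; a real
    client reads them from the WSDL, as the Kepler actor does."""
    args = "".join(f"<{k}>{v}</{k}>" for k, v in arguments.items())
    return (
        '<soap:Envelope '
        'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>"
        f'<m:{method} xmlns:m="{namespace}">{args}</m:{method}>'
        "</soap:Body></soap:Envelope>"
    )

request = soap_envelope("getImageForGridAsciiString",
                        {"fileFormat": "jpg", "gridData": "..."})
print(request)
```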

93
SSH Actor and Including Existing Scripts in a
Workflow
  • Step by step instructions
  • Search for ssh in the Components tab in left
    pane
  • Drag SSH To Execute onto the canvas
  • Double click the actor,
  • Type in a remote host you have access to.
  • Type in your username
  • Search for String Constant in the components
    tab. Drag and drop String Constant onto
    workflow canvas
  • Double click the String Constant, type ls and
    commit
  • Connect String Constant output with the SSH To
    Execute command input (lowest)
  • Add a Display and connect its input with the
    SSH To Execute stdout output (top)
  • Add the SDF director
  • Run the workflow
  • If you have a script deployed on the server, you
    can replace the ls command to invoke the
    script.
  • e.g., perl tmp.pl
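What the SSH actor runs for one firing amounts to a remote command invocation; a plain-code sketch (the host name is hypothetical, and the actor additionally manages sessions and authentication):

```python
import subprocess  # used only by the commented-out real invocation below

def build_ssh_command(user, host, command):
    """Assemble the ssh invocation the actor performs for one firing."""
    return ["ssh", f"{user}@{host}", command]

cmd = build_ssh_command("alice", "geon01.sdsc.edu", "ls")
print(cmd)  # ['ssh', 'alice@geon01.sdsc.edu', 'ls']

# Running it requires real access to the host, so it is shown but not
# executed here; stdout would carry the remote directory listing:
# result = subprocess.run(cmd, capture_output=True, text=True)
# print(result.stdout)
```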

94
Using Various Displays
  • Open the 03-ImageDisplay.xml under the
    demos/getting-started directory in your local
    Kepler folder
  • Run the workflow
  • Search for browser in the components tab
  • Drag and drop Browser Display onto the canvas
  • Replace ImageJ with Browser Display (connect
    Image Converter output to Browser Display
    inputURL)
  • Run workflow again
  • Replace Browser Display with a textual
    Display
  • Run workflow

95
Questions? Thanks!
Ilkay Altintas
altintas@sdsc.edu
http://www.sdsc.edu
http://kepler-project.org