1
On-Demand Process Collaboration in Service
Oriented Architecture
  • W. T. Tsai
  • Computer Science Engineering Department
  • Arizona State University
  • Tempe, AZ 85287
  • wtsai@asu.edu

2
Agenda
  • Overview of Interoperability
  • Data Provenance
  • OASIS CPP/CPA Collaboration
  • Use Scenarios
  • Dynamic Collaboration Protocol (DCP)
  • Verification Framework for DCP
  • Case study
  • Summary

3
Interoperability
  • Interoperability is defined as
  • The ability of two or more systems or components
    to exchange information and to use the
    information that has been exchanged.
  • Interoperability is a critical issue for
    service-oriented systems.
  • But interoperability of what?

4
Kinds of Interoperability
  • Protocol interoperability
  • Allow two parties (services, processes, clients,
    or systems) to communicate with each other. For
    example, WSDL, SOAP, OWL-S, HTTP, UDDI, ebXML.
  • Protocol interoperability is the minimum
    requirement.
  • Issues: QoS, performance, implementation
  • Data interoperability
  • Allow two parties (services, processes, clients,
    or systems) to exchange data.
  • MetaData, XML (data schema)
  • Issues: Security, integrity, data provenance

5
Kinds of Interoperability
  • Process Interaction: Allows two parties to
    communicate with each other while the processes
    are running.
  • Static arrangement: Both processes and
    interactions are specified (or fixed) before the
    interaction. This is traditional SOA.
  • For example, one of them is a workflow of an
    application, the other is a service.
  • Both can be services.
  • Dynamic arrangement: Interaction protocols are
    established at runtime.
  • The first step is to allow data exchange (OASIS
    CPP/CPA).
  • The next step is to allow these processes to
    interact with the workflow (ECPP/ECPA).
  • Issues: This is the current challenge.

6
Interoperability in SOA
  • Following the concept of SOA, each sub-system in
    the composed complex system
  • Is a self-contained autonomous system,
  • Provides services,
  • Collaborates with other sub-systems, and
  • Is loosely coupled with other systems.
  • To achieve interoperability, each system needs to
    be able to
  • Exchange data and services in a consistent and
    effective way.
  • Provide universal access capabilities independent
    of platforms.

7
Interoperability in SOA (cont.)
  • For service-oriented systems, services
  • Exchange data and
  • Collaborate with fellow services in terms of
    tasks and missions.
  • While the current interoperability technologies
    such as standard interface and ontology are
    critical for SOA interoperability, they are not
    sufficient because
  • The current interface technologies provide method
    signatures only for a single service.
  • These method signatures do not provide sufficient
    information for another new system or user to
    properly use the service, e.g.
  • What is the proper calling sequence among methods
    of this service?
  • What are the dependencies among methods of this
    service or of another service?

8
Interoperability in SOA (cont.)
  • To achieve full functionality, we need to expand
    interoperability because
  • Data exchange is only a small part of
    interoperability,
  • Systems need to interact with each other at
    run-time
  • One system may use the services provided by
    others and
  • Systems may need to work with some legacy
    systems.
  • To make heterogeneous systems work with each
    other, we need a framework which provides
    support for
  • Platform independent system service
    specification,
  • System wrapping for legacy systems, and
  • System composition and re-composition.

9
Issues in Interoperability
  • We need to design efficient, secure and scalable
    protocols at all levels.
  • Data provenance is a serious challenge in SOA
    because numerous data items go through an SOA
    system, and we need to know the history of the
    data: How reliable is it? How secure is it? What
    is the integrity level? How accurate is it?
  • Metadata: How do we design the metadata we
    need? How do we evolve metadata?
  • Process Interaction: How do we specify the
    possible interactions? How can two services
    establish a dynamic interaction at runtime? How
    can the two services verify/evaluate/monitor/assess
    the dynamic interaction?
  • Do we need this kind of interoperability?
  • What are the implications of these kinds of
    interoperability?

10
Agenda
  • Overview of Interoperability
  • Data Provenance
  • OASIS CPP/CPA Collaboration
  • Use Scenarios
  • Dynamic Collaboration Protocol (DCP)
  • Verification Framework for DCP
  • Case study
  • Summary

11
Data Provenance (Pedigree) Issue in SOA Systems
  • DoD defines data provenance (pedigree) as
    follows:
  • Ensuring that data are visible, available, and
    usable when needed and where needed to accelerate
    decision-making
  • Tagging of all data (intelligence,
    non-intelligence, raw, and processed) with
    metadata to enable discovery of data by users
  • Posting of all data to shared spaces to provide
    access to all users except when limited by
    security, policy, or regulations
  • Specifically, the questions that the pedigree
    study should answer are
  • If you write a program to provide data services,
    can all potential consumers of the data determine
    the data pedigree (i.e., derivation and quality),
    the security level, and access control level of
    the data?
  • Rationale: The question will elicit how a consumer
    can determine data asset quality.
  • In other words, data provenance is defined as
    the quality, including reliability, availability,
    and trustworthiness, the derivation, and the
    process by which the data is manipulated.

12
Data Provenance (Pedigree) Issue in SOA Systems
  • Groth, Luck and Moreau stated the requirements
    for a data provenance system as follows:
  • Verifiability: A provenance system should be able
    to verify a process in terms of the actors (or
    services) involved, their actions, and their
    relationships with one another.
  • Accountability: An actor (or service) should be
    accountable for its actions in a process. Thus, a
    provenance system should record, in a
    non-repudiable manner, any provenance generated
    by a service.
  • Reproducibility: A provenance system should be
    able to repeat a process and possibly reproduce a
    process from the provenance stored.
  • Preservation: A provenance system should have the
    ability to maintain provenance information for an
    extended period of time. This is essential for
    applications running in DoD enterprise systems.
  • Scalability: Given the large amounts of data that
    DoD systems, such as the GIG, need to handle, a
    provenance system must be scalable.
  • Generality: A provenance system should be able to
    record provenance from a variety of applications.
  • Customizability: A provenance system should allow
    DoD users to make customizations, such as the
    types of provenance, the time and events of
    recording, and the granularity of provenance.

13
Data Provenance (Pedigree) Issue in SOA Systems
  • Rajbhandari and Walker stated two additional
    requirements for an SOA-based provenance system:
  • A provenance system should be able to collect and
    archive the provenance of the transformation of
    data during data processing by web services.
  • The provenance data should be accessible and
    viewable by web browsers and query interface.
  • Furthermore, provenance in SOA is often
    classified into two granularities:
  • Fine-grain provenance: This refers to tracing the
    data movement between databases, including where
    the data come from and go to, the rationale for
    the data, and the time points of data creation,
    manipulation, and termination.
  • Coarse-grain provenance: This refers to data
    generated through processing a workflow.

14
Existing Solution to Data Provenance
  • Protocol for Data Provenance: Groth, Luck, and
    Moreau proposed a protocol for recording
    provenance in a service-oriented environment.
    The protocol consists of the following steps:
  • Negotiation: This is the step in which both
    clients and their service providers agree on the
    provenance agreement.
  • Invocation: Once a provenance agreement is
    reached, a client can start the provenance
    service by invoking it.
  • Provenance Recording: This step actually records
    the data communicated in a provenance database.
  • Termination: The provenance database has received
    all the relevant data, and it will store the data
    appropriately for future queries.
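
A minimal sketch of this four-step protocol in Python, assuming an in-memory store; the class and method names (ProvenanceStore, negotiate, invoke, record, terminate) are illustrative stand-ins, not part of Groth, Luck, and Moreau's specification.

    # Hypothetical sketch of the four-step provenance protocol.
    class ProvenanceStore:
        def __init__(self):
            self.records = []

        def negotiate(self, client, provider):
            # Step 1: both sides agree on what will be recorded.
            return {"client": client, "provider": provider,
                    "record": "all-messages"}

        def invoke(self, agreement, request):
            # Step 2: the client starts the recorded interaction.
            self.record(agreement, "request", request)
            response = "response-to-" + request  # stand-in for the real call
            self.record(agreement, "response", response)
            return response

        def record(self, agreement, kind, payload):
            # Step 3: each exchanged message goes to the provenance database.
            self.records.append((agreement["client"], agreement["provider"],
                                 kind, payload))

        def terminate(self):
            # Step 4: finalize the stored data for future queries.
            return list(self.records)

    store = ProvenanceStore()
    agreement = store.negotiate("loan-client", "credit-service")
    store.invoke(agreement, "credit-check")
    print(store.terminate())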

15
Existing Solution to Data Provenance (cont.)
  • Data Provenance Framework: Rajbhandari and Walker
    proposed a provenance system for SOA-based
    systems. The basic idea is to incorporate a proxy
    before each consumer (client) and producer
    (server) processes the data.
  • In this framework, both consumer and producer can
    be services in an SOA. The proxy will record and
    track all the provenance information about the
    data that are going through the proxy to either a
    client or server.
  • The proxy provides the following services:
  • Provenance Collection Service (PCS): This is
    responsible for collecting all the details of the
    transformations of data sets that occurred during
    the workflow execution and recording them in a
    provenance database.
  • Provenance Query Service (PQS): This is
    responsible for query processing of the
    provenance database.

Proxy-Based Data Provenance Framework
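
A sketch of the proxy idea in Python, assuming a single log shared by the collection (PCS) and query (PQS) roles; all names here are illustrative, not taken from Rajbhandari and Walker's implementation.

    # Illustrative proxy that records provenance for every call it forwards.
    import time

    class ProvenanceProxy:
        def __init__(self, producer):
            self.producer = producer   # the wrapped service, as a callable
            self.log = []              # stands in for the provenance database

        def call(self, consumer, data):
            result = self.producer(data)
            # PCS role: record the transformation that just occurred.
            self.log.append({"consumer": consumer, "input": data,
                             "output": result, "time": time.time()})
            return result

        def query(self, consumer):
            # PQS role: answer provenance queries against the log.
            return [e for e in self.log if e["consumer"] == consumer]

    proxy = ProvenanceProxy(lambda d: d.upper())   # toy producer service
    proxy.call("client-1", "raw reading")
    print(proxy.query("client-1"))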
16
Existing Solution to Data Provenance (cont.)
  • Metadata for Data Provenance: Several preliminary
    studies suggested that metadata can play a key
    role in data provenance; however, much research
    is still needed to make metadata for data
    provenance practically usable.
  • While an XML meta-model is essential for data
    provenance, having a meta-model is only one of
    the necessary ingredients. Many issues need to be
    addressed before data provenance can be
    successfully applied.

17
Issues for Data Provenance
  • Data provenance is a serious problem, and its
    research is just beginning. Most of the proposed
    solutions are exploratory in nature and have not
    even been implemented or experimented with. Issues
    such as the performance and quality of data
    provenance have not been addressed at all. The
    overall framework for data provenance is also
    missing. Existing studies addressed a small
    subset of issues related to data provenance.
  • One of the key issues is the data volume. Some
    solutions are rather expensive, such as requiring
    each client and service provider to have a
    PCS/PQS proxy to record all data transmitted. For
    a typical mission-critical application, the
    enormous amount of data can easily exceed the
    capacity of even vast storage systems and hinder
    the normal network operation for the designated
    missions. Furthermore, the data collected will
    continue to increase, as no effective technique
    is available to reduce the amount of data while
    meeting the data provenance requirements.
  • Thus, it is necessary to determine what data and
    what aspects of the data must be tracked. This
    decision can be made at network design time as
    well as during network operation to support agile
    war-fighting.

18
Issues for Data Provenance
  • Another issue related to data volume is data
    classification, reduction, and organization. It
    is necessary to develop policies and algorithms
    to classify data, reduce or eliminate the
    enormous obsolete and useless data that will be
    collected, and organize data for optimal
    performance in data provenance.
  • Data can be at most as dependable as the services
    that process them. Thus, it is necessary to
    evaluate, rank, and label the dependability and
    integrity of services and data that they
    manipulate. This issue has not been addressed in
    the literature. In fact, few papers in the
    literature discussed verification and validation
    (V&V) of services in service-oriented
    architecture. It is necessary to develop a formal
    integrity model for data and services in a
    service-oriented environment. Currently no such
    model is available.

19
Approaches to Address the Data Provenance Issues
  • Not all data need to be tracked. For those data
    that need to be tracked, only selected aspects of
    these data need to be tracked.
  • The decision concerning what data and what
    aspects of the data to track can be made at
    runtime during the mission to provide dynamic
    data provenance for agile commanding, decision,
    and action.
  • A decision-support system is made available for
    operators to decide on demand what data and
    what aspects of data to track, and data priority.
  • Use mature products. It is not necessary to
    re-invent the wheel. For example, many dependable
    database management systems are available today,
    and thus it is not necessary to re-develop a new
    database management system for data provenance.
  • Modify and extend the existing networking
    protocols such as IPv6, XML, and OASIS
    collaboration protocols, so that data provenance
    can be automatically carried with minimum
    overhead. Most of the current data provenance
    protocols can be easily implemented by the
    existing protocols.
  • Develop several data collection strategies
    embedded in IPv6, XML, and OASIS.

20
Approaches to Address the Data Provenance Issues
  • The first task is to develop a data provenance
    classification system. The initial classification
    categories include:
  • Maximum provenance: This is for priority-one
    data. The complete history is needed.
  • Time-based provenance: The data provenance is
    kept for a certain time period only, for example,
    during the current mission plus 2 years (or 2
    weeks).
  • Event-based provenance: The data associated with
    a specific event will be tracked.
  • Actor-related provenance: The data associated
    with a specific actor (person) will be tracked.
    For example, all the data related to the bank
    manager.
  • Process-based provenance: All data related to a
    specific process will be collected once certain
    conditions are activated. For example, a
    terrorist-plot alert will trigger all the
    relevant data about the participants in the
    process to be stored and processed with a high
    priority.
  • Minimum provenance: Only certain aspects of data
    will be tracked, e.g., sender, receiver,
    intermediate service names, and time of message
    delivery.
  • No provenance: The data essentially have no value
    and can be safely discarded.
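
These categories could be encoded directly, for instance as an enumeration consulted by a tagging service. The sketch below is one assumed realization in Python, not part of the proposal itself.

    from enum import Enum

    class Provenance(Enum):
        MAXIMUM = "complete history"            # priority-one data
        TIME_BASED = "track for a set period"
        EVENT_BASED = "track data tied to an event"
        ACTOR_RELATED = "track data tied to an actor"
        PROCESS_BASED = "track data of a triggered process"
        MINIMUM = "sender/receiver/route/time only"
        NONE = "no value; safe to discard"

    def should_track(tag):
        # Only the NONE class is exempt from all tracking.
        return tag is not Provenance.NONE

    print(should_track(Provenance.MINIMUM))   # True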

21
Approaches to Address the Data Provenance Issues
  • The second task is to develop various kinds of
    data collection strategies:
  • Actor-based: An actor (a system or service) can
    have a monitoring service or agent that tracks
    the data communicated. This is the approach
    adopted by current solutions.
  • Message-based: Instead of depending on a
    monitoring agent to collect data, the data may
    carry its own provenance, e.g., in its XML file.
    Each service that uses the XML file can leave a
    trace in the file so that data provenance can be
    tracked.
  • In the message-based approach, it is necessary to
    apply digital signatures to ensure that no other
    services or systems can overwrite data during the
    subsequent communications.
  • In both cases, it is necessary to design an
    intelligent algorithm to allow a key person,
    e.g., the commander, to decide what data to
    track and what aspects of the data will be
    tracked.
  • To reduce the length of the data file, each
    service can have an intelligent agent that
    reduces the data history in an intelligent
    manner, i.e., useless data is removed or
    summarized. Generally speaking, data should be
    ranked according to their criticality. The access
    history of more critical data should be stored
    with a high priority.
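
A sketch of the message-based approach in Python, where each service appends a signed trace entry to the message itself; HMAC from the standard library stands in for real digital signatures, and all names are illustrative.

    import hashlib, hmac, json

    KEY = b"per-service secret"   # in practice each service has its own key

    def append_trace(message, service_name):
        # Sign the new entry so later services cannot rewrite the history.
        entry = {"service": service_name, "position": len(message["trace"])}
        body = json.dumps(entry, sort_keys=True).encode()
        entry["sig"] = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        message["trace"].append(entry)
        return message

    msg = {"payload": "sensor reading", "trace": []}
    append_trace(msg, "sensor-service")
    append_trace(msg, "fusion-service")
    print([e["service"] for e in msg["trace"]])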

22
Approaches to Address the Data Provenance Issues
  • The third task is to define the data integrity
    level: the higher the level, the more confidence.
    Data integrity refers to the assurance that
  • a program will execute correctly, and
  • data is accurate and/or reliable.
  • Each service will have an integrity level, and
    data produced by a high integrity service can go
    into low-integrity service, but not the other
    way. This is opposite to the security management,
    where a low security-level data can go to a high
    security-level process, but a high security-level
    data cannot go to a low security-level process.
  • Similarly, a service should also be ranked
    according to the data provenance and other
    criteria in the service-oriented environment.
    This ranking mechanism helps identify data
    provenance violations. Specifically, if
    a Low → High data flow appears, the data
    provenance V&V engine should be called to decide
    whether incorrect or unreliable data appear. The
    service integrity can be ranked by its
    performance under CV&V (Collaborative
    Verification and Validation).
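
A sketch of the flow rule in Python: data may flow from a higher-integrity service to a lower-integrity one, but a low-to-high flow should trigger the V&V engine. The service names and levels are invented for illustration.

    INTEGRITY = {"verified-sensor": 3, "fusion": 2, "experimental-filter": 1}

    def flow_allowed(producer, consumer):
        # High-integrity output may enter a lower-integrity service,
        # but not the other way around (the reverse of the security rule).
        return INTEGRITY[producer] >= INTEGRITY[consumer]

    print(flow_allowed("verified-sensor", "fusion"))       # True
    print(flow_allowed("experimental-filter", "fusion"))   # False: call V&V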

23
Approaches to Address the Data Provenance Issues
  • Service integrity level can be determined by V&V
    techniques on the participating services. A
    faulty service naturally has low integrity, and
    vice versa.
  • If a service is ranked according to its integrity
    in handling data, it is possible to use the
    Clark-Wilson model for integrity checking. In
    this model, each service is ranked and data are
    also ranked; the integrity can be checked by
    following the data flow and by applying the
    operators in the model.
  • One problem with applying the Biba model and the
    Clark-Wilson model is that there are many
    services, and data are dynamic and of high
    volume. Thus, the complexity of tracking data
    integrity through the maze of the
    service-oriented environment will still be
    expensive.

24
Agenda
  • Overview of Interoperability
  • Data Provenance
  • OASIS CPP/CPA Collaboration
  • Use Scenarios
  • Dynamic Collaboration Protocol (DCP)
  • Verification Framework for DCP
  • Case study
  • Summary

25
OASIS CPP/CPA Collaboration
  • Collaboration is an association, partnership, or
    agreement between two legal entities to conduct
    business. (OASIS)
  • A collaboration likely results in at least one
    business process being implied between entities.
  • Collaboration over the World Wide Web is a broad
    area of research involving wide-reaching issues
    (W3C):
  • Knowledge representation
  • Annotation of objects
  • Notification and
  • Any other issues which arise in the creation of
    shared information systems and collaborative
    development.

26
Collaboration Protocols
  • To facilitate the process of conducting
    collaboration, potential collaborative services
    need a mechanism to publish information about the
    collaboration they support.
  • This is accomplished through the use of a
    Collaboration Protocol Profile (CPP).
  • A CPP is a document which allows a service to
    describe its supported collaboration processes
    and service interface requirements in a manner
    in which they can be universally understood by
    other services.
  • A special collaboration agreement called a
    Collaboration Protocol Agreement (CPA) is derived
    from the intersection of two or more CPPs.
  • CPA serves as a formal handshake between two or
    more collaborative services wishing to
    collaborate with each other to achieve a mission.
  • OASIS and UN/CEFACT sponsored ebXML as a global
    electronic business standard.
  • Its Collaboration Protocol Profile (CPP) and
    Collaboration Protocol Agreement (CPA) are used
    to specify trading partners' technical
    information for doing e-business.

27
OASIS ebXML CPP/CPA specification
http://www.ebxml.org/specs/ebcpp-2.0.pdf
  • As defined in the ebXML Business Process
    Specification Schema (ebBPSS), a Business Partner
    is an entity that engages in Business
    Transactions with other Business Partner(s).
  • Collaboration-Protocol Profile (CPP) describes
    the Message-exchange capabilities of a Party.
  • Collaboration-Protocol Agreement (CPA) describes
    the Message-exchange agreement between two
    Parties.
  • A CPA MAY be created by computing the
    intersection of the two Partners' CPPs.
  • Included in the CPP and CPA are details of
    transport, messaging, security constraints, and
    bindings to a Business-Process-Specification (or,
    for short, Process-Specification) document that
    contains the definition of the interactions
    between the two Parties while engaging in a
    specified electronic Business Collaboration.

28
OASIS ebXML CPP/CPA specification
http://www.ebxml.org/specs/ebcpp-2.0.pdf
  • The objective of the OASIS ebXML CPP/CPA
    specification is to ensure interoperability
    between two Parties even though they MAY procure
    application software and run-time support
    software from different vendors.
  • Both Parties SHALL use identical copies of the
    CPA to configure their run-time systems. This
    assures that they are compatibly configured to
    exchange Messages whether or not they have
    obtained their run-time systems from the same
    vendor.
  • The configuration process MAY be automated by
    means of a suitable tool that reads the CPA and
    performs the configuration process.

29
OASIS Business Process Modeling
  • A user creates a complete business process and
    information model.
  • The user creates an ebBP specification by
    extracting and formatting the nominal set of
    elements necessary to configure an ebXML runtime
    system based on the model.
  • The ebBP specification becomes the input to the
    formation of ebXML trading partner CPP and CPA.
  • CPP and CPA in turn serve as configuration files
    for the BSI (Business Service Interface) software
    component.

30
Overview of ebXML Process Specification
A Business Collaboration Process Specification
consists of a set of roles collaborating through
a set of choreographed Business Transactions by
exchanging Business Documents.
31
Relation Between CPP Process Specification
CPP uses the <PartyInfo> element to reference the
corresponding Process Specification.
32
Overview of Collaboration-Protocol Profiles (CPP)
Party A tabulates the information to be placed in
a repository for the discovery process,
constructs a CPP that contains this information,
and enters it into an ebXML Registry or similar
repository along with additional information
about the Party. The additional information might
include a description of the Businesses that the
Party engages in. Once Party A's information is
in the repository, other Parties can discover
Party A by using the repository's discovery
services.
33
Overview of Collaboration-Protocol Profiles (CPP)
Layered architecture of CPP specification
The ProcessSpecification, DeliveryChannel,
DocExchange, and Transport elements of the CPP
describe the processing of a unit of Business
(conversation). These elements form a layered
structure somewhat analogous to a layered
communication model.
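The layering can be pictured as a nested structure. The sketch below is a hypothetical, heavily abbreviated rendering of those four elements as a Python dictionary; a real CPP is an XML document governed by the ebCPP 2.0 schema, and the field values here are invented.

    # Abbreviated, hypothetical CPP skeleton mirroring the layered elements.
    cpp = {
        "PartyInfo": {
            "ProcessSpecification": {"name": "BuySell",
                                     "location": "buysell.xml"},
            "DeliveryChannel": {
                "DocExchange": {"reliable_messaging": True},
                "Transport": {"protocol": "HTTP",
                              "endpoint": "https://example.org/msg"},
            },
        },
    }
    print(cpp["PartyInfo"]["DeliveryChannel"]["Transport"]["protocol"])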
34
Overview of Collaboration-Protocol Agreement (CPA)
Party A and Party B use their CPPs to jointly
construct a single copy of a CPA by calculating
the intersection of the information in their
CPPs. The resulting CPA defines how the two
Parties will behave in performing their Business
Collaboration.
35
Working Architecture of CPP/CPA with ebXML
Registry
36
Limitation of CPP/CPA
  • Focus on message exchange only.
  • Thus, any issues related to workflow will not be
    addressed at all.
  • Common issues related to workflow during
    collaboration are:
  • Deadlock: Each party is waiting for the other
    party to reply.
  • Starvation: The other party fails to respond, and
    the party waiting for the data is starving.
  • Synchronization: Both parties need to synchronize
    with each other.
  • Transaction: All or nothing.
  • Missing: A message was originally sent and
    received, but the message was discarded after a
    receiver rollback; the receiver needs the message
    to be re-sent, but the sender never rolls back.
  • Orphan: A message has been sent, but the sending
    party rolls back, so the message becomes invalid
    even though it has been sent and received.

37
Limitation of CPP/CPA
Deadlock
Starving
38
OASIS CPP/CPA Example
Example of CPP Documents:
http://www.oasis-open.org/committees/ebxml-cppa/schema/cpp-example-companyA-2_0b.xml
http://www.oasis-open.org/committees/ebxml-cppa/schema/cpp-example-companyB-2_0b.xml
Example of CPA Document:
http://www.oasis-open.org/committees/ebxml-cppa/schema/cpa-example-2_0b.xml
http://www.oasis-open.org/committees/ebxml-cppa/schema/cpp-cpa-2_0b.xsd
39
Agenda
  • Overview of Interoperability
  • Data Provenance
  • OASIS CPP/CPA Collaboration
  • Use Scenarios
  • Dynamic Collaboration Protocol (DCP)
  • Verification Framework for DCP
  • Case study
  • Summary

40
Use Scenarios
  • A use scenario
  • Is designed for service interoperability
    specification.
  • Specifies how a service or system is used by
    other services or systems.
  • Focuses on the workflow part of service
    interoperability.
  • Defines how a particular function can be used
    in a stepwise fashion.
  • Use Scenario vs. Process Specification
  • A process specification describes the behavior of
    a system when the system is activated with a
    specific input.
  • A use scenario describes a possible sequence of
    actions to activate a service provided by the
    system.

41
Use Scenario Analyses
  • With the use scenario specified, we can perform
  • Automated interoperation generation
  • Interoperation correctness checking
  • Interoperability cross checking
  • With the support of the analytic techniques
    mentioned above, users can verify the correctness
    of use scenarios.
  • This can further enhance the interoperability of
    systems.

42
System Service Specification
  • For different systems to be interoperable with
    each other, system's service specification needs
    to conform to a common standard.
  • Services designed using the same service
    specification language can have a higher level of
    interoperability.
  • System service specification is a system profile
    which provides information about what the system
    is. The profile includes the following
    information:
  • Interface Specification
  • Describes the calling parameters and return
    values of the system.
  • The ACDATE model in E2E automation provides the
    capability for interface specification
  • System Scenario & Use Scenario
  • Describe how the system works and how to work
    with this system.

43
ACDATE / Process Overview
  • The ACDATE (Actors, Conditions, Data, Actions,
    aTtributes, Events) modeling specification
  • A language for modeling and specification in the
    domain of system engineering and software
    engineering.
  • It facilitates the specification, analysis,
    simulation, and execution of the requirement and
    therefore the system.
  • A Process is a semi-formal description of system
    functionality
  • It is a sequence of events expected during
    operation of system products which includes the
    environment conditions and usage rates as well as
    expected stimuli (inputs) and response (outputs).
  • ACDATE entities are the building blocks for
    Process specification.
  • After one's system requirements have been
    decomposed into ACDATE entities, one can then
    specify Processes.
  • This ACDATE/Process model allows for system
    modeling and provides the capability to perform
    various analyses for requirement V&V.
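
A sketch of a few ACDATE entities as Python dataclasses; the fields chosen are assumptions for illustration (Conditions, Data, and aTtributes are omitted), not the full ACDATE metamodel.

    from dataclasses import dataclass, field

    @dataclass
    class Actor:
        name: str

    @dataclass
    class Action:
        actor: Actor
        name: str

    @dataclass
    class Process:
        # A Process is a sequence of expected actions/events built
        # from ACDATE entities.
        steps: list = field(default_factory=list)

    tank = Actor("Tank")
    p = Process([Action(tank, "Start"), Action(tank, "Fire"),
                 Action(tank, "Stop")])
    print([s.name for s in p.steps])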

44
Use Scenarios vs. System Scenarios
  • A system scenario describes the behavior of a
    system when the system is activated with a
    specific input,
  • A use scenario describes a possible sequence of
    actions to activate a service provided by the
    system.
  • The use scenario, once specified, can greatly
    reduce the time needed for C2 systems to
    collaborate by properly calling each other in the
    specified order.

45
Use Scenario Specification -- syntax semantics
  • structural constructs
  • choice option option option
  • choice means that the interoperation can select
    any single sub-process (listed as options) to
    continue the control flow.
  • precond
  • precond indicates the preconditions before a
    particular action
  • postcond
  • postcond indicates the postconditions after a
    particular action
  • criticalreg
  • criticalreg indicates a critical region such that
    no other actions can take place to interrupt the
    execution of actions within the critical region.
    Any action sequence outside a critical region can
    be interleaved with any sub-process.
  • <>
  • Any entities enclosed by <> are parameter
    entities.
  • With sub-processes, the use scenario can describe
    the interoperation of hierarchical systems in
    different levels.

46
Use Scenario Example
  • This example use scenario is specified for a
    control system that is in charge of a battle tank
    in a simple C2 system.
  • The control system, Tank, has 5 functions:
  • Start
  • Move
  • LocateTarget
  • Fire
  • Stop
  • The control system, BattleControl, has 1
    function:
  • OrderToFire
  • The control system, Security, has 1
    function:
  • VerifyPassword

47
Use Scenario Example (Cont.)
  • Use scenario for Tank
  • do ACTION Tank.Start
  • choice
  • option
  • do ACTION Tank.Move
  • option
  • do ACTION Tank.LocateTarget
  • option
  • do ACTION Tank.Fire
  • do ACTION Tank.Stop

48
Automated Interoperation Generation
  • If more than one system specified with use
    scenarios is to be put together to compose a
    complex system, the interoperations can be
    generated by interleaving the use scenarios of
    the individual systems.

49
Automated Interoperation Generation -- Example
  • Automatically generated interoperations:
  • <Start, Move, Stop>,
  • <Start, LocateTarget, Stop>, and
  • <Start, Fire, Stop>.
  • When interoperating with BattleControl, the
    following interoperations can be generated:
  • <Start, BattleControl.OrderToFire, LocateTarget,
    Stop>
  • <Start, LocateTarget, BattleControl.OrderToFire,
    Stop>
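
A sketch of the interleaving step in Python: it enumerates all order-preserving merges of two action sequences, a superset of the interoperations listed above; the real generator would also honor scenario boundaries and the choice/option structure. This is an assumed reading of "interleaving."

    def interleave(a, b):
        # Yield every order-preserving merge of sequences a and b.
        if not a or not b:
            yield list(a) + list(b)
            return
        for rest in interleave(a[1:], b):
            yield [a[0]] + rest
        for rest in interleave(a, b[1:]):
            yield [b[0]] + rest

    tank = ["Start", "LocateTarget", "Stop"]
    battle = ["BattleControl.OrderToFire"]
    for seq in interleave(tank, battle):
        print(seq)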

50
Interoperation Correctness Checking
  • Quite a lot of interoperations can be generated
    or specified by interleaving the individual use
    scenarios of different systems.
  • But not all generated interoperations are correct
    sequences according to the constraints specified.
  • By constraint checking we can identify the
    interoperations that do not satisfy the
    constraints:
  • precondition checking
  • postcondition checking and
  • critical region checking.

51
Updated Use Scenario Example
  • The use scenario is updated by adding a Critical
    Region.
  • Updated use scenario for Tank:
  • criticalreg
  • do ACTION Tank.Start
  • choice
  • option
  • do ACTION Tank.Move
  • option
  • do ACTION Tank.LocateTarget
  • option
  • do ACTION Tank.Fire
  • do ACTION Tank.Stop

52
Interoperation Correctness Checking -- Example
  • With the Critical Region constraint specified for
    the Tank use scenario, not all interoperations
    are correct.
  • Interoperations for Tank and BattleControl:
  • <Start, LocateTarget, BattleControl.OrderToFire,
    Stop> is a correct interoperation.
  • <Start, BattleControl.OrderToFire, LocateTarget,
    Stop> is NOT a correct interoperation.
  • BattleControl.OrderToFire cannot be put in the
    section tagged as criticalreg.
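
A sketch of the critical-region check in Python, assuming the region spans Tank.Start and the single option action chosen after it, so that no foreign action may intervene inside that span; names follow the example above.

    TANK_ACTIONS = {"Start", "Move", "LocateTarget", "Fire", "Stop"}

    def respects_critical_region(seq):
        inside = False
        for action in seq:
            if action == "Start":
                inside = True          # the region opens at Tank.Start
            elif inside:
                if action not in TANK_ACTIONS:
                    return False       # a foreign action intervened
                inside = False         # the chosen option closes the region
        return True

    good = ["Start", "LocateTarget", "BattleControl.OrderToFire", "Stop"]
    bad = ["Start", "BattleControl.OrderToFire", "LocateTarget", "Stop"]
    print(respects_critical_region(good), respects_critical_region(bad))
    # True False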

53
Interoperability Cross Checking
  • The constraints may be specified in different use
    scenarios.
  • If one wants to put the systems together, the
    interoperability cross checking needs to be done
    to identify potential inconsistencies.

54
Updated Use Scenario Example
  • The use scenario is updated by adding
    Preconditions.
  • Updated use scenario for Tank:
  • do ACTION Tank.Start
  • choice
  • option
  • do ACTION <Security>.VerifyPassword precond
  • do ACTION Tank.Move
  • option
  • do ACTION <Security>.VerifyPassword precond
  • do ACTION Tank.LocateTarget
  • option
  • do ACTION <Security>.VerifyPassword precond
  • do ACTION Tank.Fire
  • do ACTION Tank.Stop

55
Use Scenario Example (Cont.)
  • The use scenario for the security control system
    is:
  • do ACTION Security.VerifyPassword postcond
  • do ACTION <Tank>.Move
  • do ACTION <Tank>.Fire

56
Interoperability Cross Checking -- Example
  • In the use scenarios specified above, system Tank
    requires verifying password before all following
    operations on the Tank
  • Move
  • LocateTarget
  • Fire
  • Security enables Fire and Move after verifying
    password, without mentioning LocateTarget.
  • A cross check shows a potential inconsistency,
    which is not necessarily an error.
  • Either Tank enforces an unnecessarily strong
    precondition on LocateTarget,
  • Or Security enables an insufficiently strong
    postcondition on VerifyPassword.

57
Extended Use Scenario
  • Use scenarios are useful for efficient system
    composition. Yet, additional information can be
    added to use scenarios to improve the
    effectiveness and scalability of system selection
    and composition.
  • The following information can be added:
  • Dependency information
  • Categorization and
  • Hierarchical use scenarios.

58
Dependency Information
  • In addition to the information specified in use
    scenarios for how to use the given system, it is
    useful to add dependency information.
  • Dependencies Specification
  • Describes other systems that need to be included
    for this system to function.
  • Compatible components list
  • A list of other systems that are known to be able
    to work with the system.
  • With this list, the system composition and
    re-composition can be done more efficiently.

59
Dependency Information -- Example
  • For an aircraft carrier:
  • Dependencies: Destroyer, Frigate, and Submarine.
  • Compatible components: Helicopter, Fighter plane,
    and Scout.
  • With the information specified above, the
    composition process will be greatly eased.
  • When putting an aircraft carrier into a SOA
    system, users will know that the destroyer,
    frigate and submarine are also needed.
  • From the information above, the users will know
    that it is compatible to put helicopters, fighter
    planes, and scouts on the aircraft carrier, but
    not battle tanks.

60
Categorization
  • For better organization, the use scenarios need
    to be categorized since
  • A system can provide multiple services.
  • Different services provided by the system may
    have different use scenarios.
  • A system working with different systems may have
    different use scenarios.
  • A set of use scenarios describing the usage of
    one specific service provided by this system can
    be put into the same category.
  • Each system can be assigned a category tree
    of use scenarios.

61
Categorization -- Example
  • In an SOA system, there is usually a command
    center which controls the overall battle.
  • Since multiple units, say Fleet 1, Fleet 2, and
    Fleet 3, are all involved in the battle, the
    command center needs to coordinate the battle and
    provide services for the Fleets, respectively.
  • To better organize the design, the use scenarios
    must be categorized accordingly.
  • Use scenarios for Fleet1
  • Use scenarios for Fleet2
  • Use scenarios for Fleet3

62
Hierarchical Use Scenario
  • Use scenarios can be hierarchical.
  • A higher level use scenario can call lower level
    use scenarios.
  • A higher level use scenario may specify the use
    of more than one subsystem.
  • The high level use scenario specifies the overall
    process and can be broken down into several low
    level use scenarios by scenario slicing.

63
System Composition
  • Complex missions often require collaboration
    among multiple participating systems.
  • Each participating system (subsystem) in a
    complex system (system of systems) focuses on
    handling one aspect of the overall mission.
  • It is important for each subsystem to be
    specified with system scenarios as well as use
    scenarios.

64
System Composition Approach
  • The bottom-up approach is more efficient to build
    a new composite system once the system scenarios
    and use scenarios are known.
  • With system scenarios, multiple analyses
    (dependency analysis, C&C analysis, event
    analysis, simulation, model checking) can be done
    to evaluate the system.
  • Automated system testing and verification with
    verification patterns can give us confidence in
    the quality of the selected system.
  • Once we have verified and validated the
    individual subsystems, we can build a complex
    system on top of them.

65
System Composition Approach (Cont.)
  • The system discovery and selection can be done by
    analyzing the system scenarios.
  • Compose the individual subsystems into the
    complex system we need by connecting the systems
    according to the use scenarios.
  • If a use scenario calls the use scenarios of
    subsystems, it specifies the interoperation among
    several different subsystems.
  • In this case, the use scenarios play the role of
    system composition pattern.

66
System Composition Example
67
System Composition Example (Cont.)
  • The System 1 composition figure on the previous
    slide shows the composition information of a
    complex system 1 with three subsystems.
  • The 3 subsystems of system 1:
  • System A,
  • System B, and
  • System C.
  • Each system is specified with system scenarios
    and use scenarios.
  • System scenarios for each subsystem provide
    information on what services this system
    provides.
  • Since each subsystem is specified with use
    scenarios, the integration becomes possible.
  • If we have interface information only for systems
    A, B, and C, we may not obtain the
    functionalities required by system 1 because we
    do not know how to call the interfaces of each
    subsystem.

68
System Composition Template
  • A use scenario can have templates.
  • In such a template, we may specify what
    functionalities we need.
  • If a system provides the functionalities
    specified in the system scenario, it can be used
    as subsystem.
  • In the system 1 composition figure:
  • There are three subsystems, A.1, A.2, and A.3,
    that provide the same functionalities.
  • Each of these subsystems can be a valid candidate
    for the composition.
  • These subsystems may be ranked with different
    criteria by the automated testing tools.
  • An appropriate subsystem can be added to the
    composite system according to different system
    performance requirements.
  • Which system is chosen will be decided at
    run-time (dynamic binding).

69
Agenda
  • Overview of Interoperability
  • Data Provenance
  • OASIS CPP/CPA Collaboration
  • Use Scenarios
  • Dynamic Collaboration Protocol (DCP)
  • Verification Framework for DCP
  • Case study
  • Summary

70
Process Collaboration
  • Process collaboration is a higher-level
    collaboration than data collaboration.
  • It requires higher interoperability of
    collaborative partners.
  • It is more efficient, more adaptive and more
    dynamic than data collaboration.
  • The development of process collaboration is shown
    in the following figure.

71
Dynamic Process Collaboration
  • Dynamic collaboration allows different services
    to interoperate with each other at system runtime
    in SOA.
  • The dynamic on-demand service-oriented
    collaboration architecture is a flexible
    architecture:
  • Application architecture, collaboration protocols
    and individual services can be dynamically
    selected and reconfigured to fit the new
    application environment.
  • Dynamic collaboration includes
  • A collaboration specification language based on
    PSML-S
  • A set of on-demand process collaboration
    protocols by extending ebXML collaboration
    protocols and
  • A collaboration framework based on CCSOA
    framework.

72
Service Specification
  • To achieve interoperability through the network,
    services need to be modeled and annotated in a
    standard formal/semi-formal specification so that
  • Different services can understand each other and
    communicate with each other.
  • Service specification is a very important part
    for effective service collaboration and
    composition.

73
Service Specification Evolution
  • Initial stage (interface stage)
  • Service descriptions mainly contain the
    input/output information, such as function name,
    return type, and parameter types.
  • Process description stage
  • Service specifications contain an abstract
    process description in addition to interface
    information.
  • Service collaboration specification stage
  • The service specifications not only contain
    everything in the previous two stages, but also
    service collaboration protocols such as
    collaboration protocol establishment,
    collaboration protocol profiles, collaboration
    constraints, and patterns.
  • WS-CDL
  • WSFL
  • Consumer-centric specification stage
  • The entire application can be specified,
    published, searched, discovered, and composed.

74
PSML-C Service Specification
  • PSML-C is designed to support the specification,
    modeling, analysis, and simulation of service
    process collaboration based on PSML-S.
  • The PSML-C specification includes
  • Process Specification
  • Constraint Specification
  • Interface Specification and
  • Collaboration Specification.

75
PSML-C Collaboration Specification
  • Collaboration specification consists of
  • A set of extended CPPs (ECPP)
  • A set of use scenarios and
  • A set of predefined extended CPAs (ECPA).
  • Each ECPP
  • Describes one collaboration capability of the
    service
  • Is associated with one use scenario that provides
    information on how the other services can use the
    interfaces published for this specific
    collaboration capability.
  • Each ECPP references a process specification
    as the internal control logic for this specific
    collaboration capability.
  • The ECPA set contains predefined ECPAs that have
    been used and evaluated in previously carried out
    service collaborations, for rapid collaboration
    establishment.

76
PSML-C Process Collaboration Protocols
  • PSML-C Process Collaboration Protocol is derived
    from the ebXML CPP and CPA.
  • ebXML CPP & CPA primarily focus on the message
    exchange capabilities of collaborating services.
  • They provide general information on collaboration
    in E-Business domain.
  • But they lack information on process
    collaboration.
  • PSML-C, based on PSML-S, is a process-oriented
    modeling language which provides rich information
    on process specification.
  • This approach extends the ebXML CPP and CPA by
    adding more process related information to the
    service collaboration specification.
  • Extended CPP and CPA specify more information on
    process collaboration and system workflow.

77
PSML-C Process Collaboration Protocols (Cont.)
  • PSML-C Collaboration Protocol consists of
  • Extended Collaboration Protocol Profile (ECPP):
    Describes the collaboration capabilities of the
    service.
  • Extended Collaboration Protocol Agreement (ECPA):
    Describes the collaboration interactions among
    the collaborative services.
  • Use Scenario: Describes how to carry out a
    specific collaboration with respect to a given
    ECPP.
  • Process Specification Reference: References
    the internal control logic of each service.
78
Extended CPP (ECPP)
  • An ECPP is a quadruple of (START, END, IN, OUT)
    where
  • START is the set of start points, where
    START ≠ ∅.
  • A collection of entry points to the process
    specification where the process begins to
    execute.
  • The START set is a non-null set.
  • END is the set of end points, where END ≠ ∅.
  • A collection of exit points of the process
    specification where the process finishes
    executing.
  • The END set is a non-null set.
  • IN is the set of incoming collaboration points.
  • A collection of collaboration points of the
    process specification that can take incoming
    events.
  • The IN set specifies what Actions in the process
    specification can be triggered by incoming Events
    from other services.
  • An in collaboration point will be further mapped
    to an in type interface of the service in the
    collaboration agreement.
  • OUT is the set of outgoing collaboration points.
  • A collection of collaboration points of the
    process specification that can issue outgoing
    events.
  • The OUT set specifies what Actions in the process
    specification can issue outgoing Events to invoke
    other services.
  • An out collaboration point will be further mapped
    to an out type interface of the service in the
    collaboration agreement.

79
Extended CPP (Cont.)
  • The following constraint is highly recommended,
    but not required, for an ECPP specification to
    describe a well-formed collaboration model:
  • Card(START) = 1 ∧ (Card(IN) > 0 ∨ Card(OUT) > 0)
  • An example of an ECPP specification for a simple
    online banking service is:
  • START: Login
  • END: Logout
  • IN: ApproveLoan
  • OUT: CreditCheck
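
The quadruple can be written down directly; below is a small Python sketch following the banking example above. The structure is illustrative, not a normative ECPP encoding.

    from dataclasses import dataclass

    @dataclass
    class ECPP:
        start: set   # entry points; must be non-empty
        end: set     # exit points; must be non-empty
        inc: set     # "in" collaboration points (triggerable Actions)
        out: set     # "out" collaboration points (invoking Actions)

    banking = ECPP(start={"Login"}, end={"Logout"},
                   inc={"ApproveLoan"}, out={"CreditCheck"})
    assert banking.start and banking.end   # START and END are non-null sets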

80
Extended CPA (ECPA)
  • PSML-C adds collaboration transaction to the
    ebXML CPA.
  • When two processes need to collaborate with each
    other, they need to negotiate with each other to
    specify possible/valid collaboration transaction.
  • An ECPA contains a set CT which specifies a
    collection of collaboration transactions.
  • A collaboration transaction is a pair (out, in),
    where in ∈ IN1 and out ∈ OUT2, and IN1 and OUT2
    are the IN set and the OUT set of the
    collaborating services, respectively.

81
Extended CPA (Cont.)
  • For two services to be able to collaborate, the
    set CT must be a non-null set.
  • An example of an ECPA specification between a
    simple online banking service and a credit
    service is:
  • CT = {(CreditCheck, ProcessCreditChecking),
    (CreditOK, ApproveLoan)}
  • where
  • CreditCheck: an out collaboration point of the
    online banking service,
  • ApproveLoan: an in collaboration point of the
    online banking service,
  • ProcessCreditChecking: an in collaboration point
    of the credit service, and
  • CreditOK: an out collaboration point of the
    credit service.
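
A sketch of deriving the CT set in Python from the "out" and "in" points of the two services; the pairing map is a hypothetical stand-in for the real negotiation, which would also match message types and semantics.

    banking_out, banking_in = {"CreditCheck"}, {"ApproveLoan"}
    credit_out, credit_in = {"CreditOK"}, {"ProcessCreditChecking"}

    # Hypothetical map of which "out" point targets which "in" point.
    pairing = {"CreditCheck": "ProcessCreditChecking",
               "CreditOK": "ApproveLoan"}

    ct = {(o, i) for o, i in pairing.items()
          if (o in banking_out and i in credit_in)
          or (o in credit_out and i in banking_in)}

    print(ct)         # both transactions from the example above
    print(bool(ct))   # a non-empty CT means the services can collaborate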

82
LTL Use Scenario Specification
  • The Use Scenario specification also uses the
    Linear Temporal Logic (LTL) syntax and semantics
    to specify the usage pattern of the service with
    respect to a specific collaboration. As a result,
    a use scenario specification is built up from a
    set of
  • proposition variables p1,p2,...
  • the usual logic connectives ¬, ∧, ∨, and →, and
  • the temporal modal operators
  • N (for next)
  • G (for always)
  • F (for eventually)
  • U (for until), and
  • R (for release).
  • An example of a use scenario specification for an
    online banking service is:
  • Login → F Logout
  • which means any login operation must eventually
    be followed by a logout operation.
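
A sketch of checking this formula over finite traces in Python (a finite-trace reading of F, "eventually"); the trace contents are invented for illustration.

    def satisfies_login_f_logout(trace):
        # Login -> F Logout: every Login must be followed, at some
        # later step, by a Logout.
        for i, step in enumerate(trace):
            if step == "Login" and "Logout" not in trace[i + 1:]:
                return False
        return True

    print(satisfies_login_f_logout(["Login", "CreditCheck", "Logout"]))  # True
    print(satisfies_login_f_logout(["Login", "CreditCheck"]))            # False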

83
Process Specification Reference
  • Process specification reference is a reference to
    the process specification which specifies the
    internal control logic associated with an ECPP.
  • With this reference information, service matching
    technologies can verify both the interface
    information as well as the internal process
    specification.
  • An example of a process specification reference
    in a simple online banking service is that the
    credit checking collaboration references the
    auto loan application process specification.

84
PSML-C Composition Collaboration Framework
85
PSML-C Composition Collaboration Framework
(Cont.)
  • The service broker stores not only service
    specifications, but also application templates
    and collaboration patterns.
  • Application builders publish the application
    templates in the service broker.
  • Service providers can subscribe to the
    application registry.
  • The pattern repository stores different types of
    collaboration patterns including architecture
    patterns, process patterns, and constraint
    patterns.
  • The Service Discovery and Matching Agent (SDMA)
    discovers and matches not only the services but
    also the application templates.
  • The Service Verification & Validation Agent
    (SVVA) supports the verification and validation
    of both individual services and the composite
    service collaboration with CV&V (Collaborative
    Verification & Validation) technologies.

86
Collaboration Patterns
  • Collaboration Patterns are repeatable techniques
    used by collaborative services to facilitate
    rapid and adaptive service composition and
    collaboration.
  • Reduce effort for composition
  • Reuse previously identified collaboration
  • Compatible with the categorization in PSML-S
    specification and modeling process.
  • Architecture Patterns
  • Process (behavioral) Patterns
  • Constraint Patterns
  • When composing a new service, we follow the
    order below:
  • Architecture → Process → Constraint

87
Application Composition Process
  • When building a new service-oriented application,
    which is a composite service consisting of
    multiple collaborative services, the service
    composition and collaboration process can be
    described as follows (sketched in code after this
    list):
  • The application builder submits a request to the
    application repository to retrieve an appropriate
    application template
  • The application builder decides which
    architecture pattern needs to be applied to build
    the application
  • Once the architecture of the application has been
    identified, the application builder identifies
    which process patterns can be applied
  • Then, the constraint patterns may be applied to
    the application based on the architectural and
    behavioral design
  • The application builder retrieves different
    services from the service pool according to the
    application template subscription information
    provided by the service broker
  • The services are tested against different
    acceptance criteria provided by the application
    builder for service selection
  • The application builder simulates the composed
    service to evaluate the overall performance
  • The application builder integrates the selected
    services and deploys the application.
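
A compact sketch of these steps as a Python pipeline; every value and threshold below is an invented placeholder for the corresponding broker/builder activity, not an implementation of the framework.

    def compose(request, pool):
        template = {"need": request}             # retrieve a template
        architecture = "pipe-and-filter"         # pick an architecture pattern
        process_patterns = ["request-reply"]     # then process patterns
        constraints = {"max_latency_ms": 100}    # then constraint patterns
        picked = [s for s in pool                # retrieve and test services
                  if s["latency_ms"] <= constraints["max_latency_ms"]]
        assert picked, "no service passed acceptance testing"
        # A real builder would now simulate the composition before deploying.
        return {"template": template, "architecture": architecture,
                "patterns": process_patterns, "services": picked}

    pool = [{"name": "svc-a", "latency_ms": 80},
            {"name": "svc-b", "latency_ms": 300}]
    print(compose("loan-app", pool)["services"])   # only svc-a is selected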

88
PSML-C Collaboration Architecture
89
PSML-C Collaboration Architecture (Cont.)
  • For collaboration, each service will be specified
    with
  • Process specification
  • Interface specification
  • Collaboration specification
  • Before each service can be registered to the
    repository, it will be verified and validated by
    multiple static analyses including simulation.
  • The collaboration protocol profiles will be
    registered in the registry and stored in the
    repository for discovery and matching.
  • Once the collaboration is established, dynamic
    analyses can be performed to evaluate the runtime
    behaviors of the service collaboration.

90
PSML-C Collaboration Phases
  • Collaboration Preparation
  • Service specification
  • Static analyses
  • Service registration
  • Application template registration
  • Collaboration Establishment
  • Service discovery and matching
  • Service ranking
  • Collaboration agreement negotiation
  • Dynamic simulation
  • Collaboration Execution
  • Policy enforcement
  • Dynamic reconfiguration and recomposition
  • Dynamic monitoring and profiling
  • Collaboration Termination
  • Collaboration verification
  • Collaboration evaluation

91
Agenda
  • Overview of Interoperability
  • Data Provenance
  • OASIS CPP/CPA Collaboration
  • Use Scenarios
  • Dynamic Collaboration Protocol (DCP)
  • Verification Framework for DCP
  • Case study
  • Summary