Traditional architecture - PowerPoint PPT Presentation

Slides: 66. Provided by: maarten91.

Transcript and Presenter's Notes

Title: Traditional architecture

1
Traditional architecture
  • Message passing, client/server, object oriented
  • hides inter-process data
  • mixes function and interaction
  • Independence needs to be designed separately for
    each interface
  • functionality
  • temporal behavior
  • availability
  • distribution
  • extensibility
  • huge effort - essentially similar each time
  • error prone - hard-to-detect errors

2
Independence in a normalized environment
3
(No Transcript)
4
Normalised environment
  • Needs to be designed only once
  • for an entire class of systems
  • architecture guarantees system properties
  • Shifts complexity from application to computing
    environment
  • improves quality
  • significantly reduces system design effort
  • Handles inter-process data
  • visibility of data
  • separation of component function and component
    interaction

5
Splice
  • Deceptively simple, yet powerful concept
  • major shift of application complexity to
    middleware
  • has proven to greatly simplify system design
  • radically changed the ratio between design and
    integration effort
  • Integration oriented
  • components
  • legacy systems
  • incremental deployment
  • interoperability
  • different operational modes

6
Splice
  • Highly adaptive
  • future proof
  • component re-use
  • robust
  • scalable
  • Paradigm shift
  • simple concept, thus small learning effort
  • bridges the gap between design and implementation

7
Shared Data Space
  • Common data repository
  • Autonomous applications
  • Interaction only through SDS
  • Only deals with appl. program interaction
  • Does not hide O.S.

8
Dataspace
  • System statespace partitioned into sorts
  • A sort is represented as a <tag, value> pair
  • Access to data through tags and queries
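The dataspace model above can be pictured with a minimal sketch: each datum is a <tag, value> pair, and readers see data only through tags and queries. All names here are illustrative, not the actual Splice API.

```python
# Minimal shared-dataspace sketch: the statespace is partitioned into sorts,
# each sort is a <tag, value> pair, and access goes through tags and queries.

class DataSpace:
    def __init__(self):
        self._store = {}                  # tag -> list of values of that sort

    def put(self, tag, value):
        self._store.setdefault(tag, []).append(value)

    def get(self, tag, query=lambda v: True):
        # Return all values of the given sort that satisfy the query.
        return [v for v in self._store.get(tag, []) if query(v)]

sds = DataSpace()
sds.put("traffic", {"loop": 1, "count": 12})
sds.put("traffic", {"loop": 2, "count": 3})
busy = sds.get("traffic", lambda v: v["count"] > 10)
```

The query argument stands in for the tag-plus-query access the slide describes; real Splice queries are richer (see the later slides on the query language).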

9
Conceptual architecture
10
Design consequences
  • Assume a traffic management application
  • Detection loops in road surface
  • Control station
  • Visual signals for drivers
  • One process collects data from detection loops
  • Another process decides on signals
  • A third process interacts with road manager

(Diagram: processes collect, decide, and interact, coupled only through the sorts traffic, request, parameters, and situation.)
11
  • Collect
  • suppose collect buffers all measurements (using
    DMA) until requested to send
  • while (true)
  •   if (get_data(request) == TRUE) put_data(traffic)
  •   sleep(n)
  • Decide
  • decide requests a batch of measurements, then
    uses the current parameter settings to compute the
    signals to be set
  • while (true)
  •   put_data(request)
  •   if (get_data(traffic) == TRUE)
  •     get_data(parameters)
  •     compute_situation()
  •     put_data(signals)
  •     put_data(situation)
  •   sleep(n)
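The collect/decide loops above can be made runnable with a plain dict standing in for the shared dataspace. The get_data/put_data names and the sort names mirror the slide's pseudocode; everything else (thresholds, signal values) is illustrative.

```python
# Runnable sketch of the collect/decide interaction from the slide.

space = {}                               # sort name -> latest value

def put_data(sort, value):
    space[sort] = value

def get_data(sort):
    return space.pop(sort, None)         # consume the datum, if present

def collect(measurements):
    # collect buffers measurements until a request arrives, then publishes them
    if get_data("request") is not None:
        put_data("traffic", list(measurements))
        measurements.clear()

def decide(parameters):
    # decide requests a batch, then derives signals from current parameters
    put_data("request", True)
    traffic = get_data("traffic")
    if traffic is not None:
        limit = parameters["threshold"]
        put_data("signals", ["SLOW" if m > limit else "OPEN" for m in traffic])

buffer = [40, 90]
decide({"threshold": 80})                # posts a request; no traffic yet
collect(buffer)                          # sees the request, publishes the batch
decide({"threshold": 80})                # now computes the signals
```

Note how the processes never call each other: if collect dies, decide starves on missing traffic data but does not deadlock, which is exactly the property the next slide claims.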

12
Problems addressed
  • Processes are autonomous
  • failure of a process may result in starvation,
    but not in deadlock
  • processes can be added/removed
  • Distribution not yet solved
  • Scalability not addressed
  • Recovery not handled

13
Distribution
  • Data allocated in central dataspace
  • bottleneck for access
  • vulnerable for failures

14
Distribution
  • Data uniquely allocated in distributed dataspace
  • remote access latency problem
  • vulnerable for failures

15
Distribution
  • Data replicated in distributed dataspace
  • shortest possible latencies
  • consistency problem

(Diagram: the shared dataspace replicated across platform 1 .. platform n.)
16
Design consequences
  • Extension of traffic management application
  • possibility of inconsistent behavior

(Diagram: the traffic-management design extended with merge and access processes; collect, merge, decide, and interact exchange the sorts traffic, request, parameters, and situation.)
17
Persistence
  • Independent application processes
  • only task is to ensure availability of data
  • can be replicated, since no operational output
  • Special service for restoring lost data
  • Solution for fail-stop failures

(Diagram: a cache coupled to each process over the shared dataspace; data survives an application or machine crash when duplicated on multiple machines.)
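The special restore service described above can be sketched as follows. The agent's only task is to keep a copy of persistent sorts and republish them after a fail-stop crash; class and method names are illustrative, not the actual Splice service.

```python
# Sketch of a persistence agent: it subscribes to persistent sorts, keeps a
# replicated copy (it has no operational output, so it can be replicated
# freely), and restores lost data after a fail-stop failure.

class PersistenceAgent:
    def __init__(self, persistent_sorts):
        self.sorts = set(persistent_sorts)
        self.copy = {}                    # replicated state, nothing else

    def observe(self, sort, value):
        if sort in self.sorts:            # agent subscribes to persistent sorts
            self.copy[sort] = value

    def restore(self, space):
        # After a crash wiped the dataspace, republish the saved data.
        for sort, value in self.copy.items():
            space[sort] = value

space = {"parameters": {"threshold": 80}}
agent = PersistenceAgent(["parameters"])
agent.observe("parameters", space["parameters"])

space.clear()                             # fail-stop: all live data lost
agent.restore(space)
```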
18
Persistence
(Diagram: shared dataspace with persistent storage; data survives an application or machine crash, and also a full system crash.)
19
Fault tolerance
  • Cold stand-by (passive replication) uses
    semi-persistence
  • Hot stand-by (semi-active replication)
  • process subscribes to all sorts, then waits for
    a signal that it may start
  • a process manager activates one of possibly
    several standby processes if there is no active
    process
  • Active replication
  • currently the application developer's problem; we
    have a theoretical solution

20
Passive replication
(Diagram: producer P on platform k and consumer C on platform i, coupled through the dataspace.)
21
Passive replication
No history!
(Diagram: after a crash, a new C is started on platform i; without persistence it starts without the previously produced data.)
22
Passive replication
(Diagram: C and P each write their state data into the dataspace; all state data is duplicated on platforms i and k.)
23
Passive replication
(Diagram: the new C on platform i recovers by reading all state data back from the dataspace.)
24
Passive replication
(Diagram: the new C recovers all state data from the dataspace, on whichever platform it is restarted.)
25
Start-up sequence
(Diagram: start-up sequence; Splice is initialized before the application processes start.)
26
Semi-active replication
(Diagram: replicated application processes A interacting only through the Shared Data Space.)
27
Engineering
  • Hot standby
  • need reliable fault detector - hard problem
  • selection of replica to become active process
    after failure
  • process must be engineered for this feature
  • main
  •   subscribe(a)
  •   ...
  •   subscribe(z)
  •   wait_for_activation() -- only one will be
      activated at any time
  •   while true
  •     ...
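The hot-standby pattern above can be sketched as follows: replicas subscribe to all sorts and then block until a process manager activates exactly one of them. The manager and its method names are illustrative, not the Splice API, and a real fault detector is a hard problem in itself, as the slide notes.

```python
# Sketch of hot standby (semi-active replication): replicas register and
# wait; a process manager keeps exactly one of them active at any time.

class ProcessManager:
    def __init__(self):
        self.standby = []
        self.active = None

    def register(self, name):
        self.standby.append(name)         # replica waits for activation

    def ensure_active(self):
        # Activate one standby replica if there is no active process.
        if self.active is None and self.standby:
            self.active = self.standby.pop(0)
        return self.active

    def report_failure(self):
        self.active = None                # the fault detector noticed a crash

mgr = ProcessManager()
mgr.register("decide-replica-1")
mgr.register("decide-replica-2")
first = mgr.ensure_active()               # first replica becomes active
mgr.report_failure()
second = mgr.ensure_active()              # standby takes over
```

Because every replica already holds all subscribed data, the takeover cost is just this activation step, which is why the later slide can claim hot standby within milliseconds.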

28
Semi-active replication
(Diagram: producer P with two replica consumers C across platforms i and k; both replicas receive all data.)
29
Semi-active replication
(Diagram: after a failure, the remaining replica C continues with P.)
30
Scoping (worlds)
(Diagram: identical applications A running in separate worlds.)
31
Engineering
  • Scalability
  • without data visibility restrictions, processing
    requirements can become overwhelming
  • a flat namespace creates a configuration problem
  • Component isolation
  • identical subsystems may have the same names
  • Different operational modes
  • training
  • simulation
  • testing / maintenance
  • operational
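The scoping idea behind worlds can be sketched like this: the same sort name in different worlds names different data, so identical subsystems and operational modes do not clash. This is purely illustrative; Splice's actual world mechanism is richer.

```python
# Sketch of worlds as scopes: data is keyed by (world, sort), so identical
# subsystems and different operational modes (training, simulation, testing,
# operational) can use the same sort names without interference.

class ScopedSpace:
    def __init__(self):
        self._store = {}                  # (world, sort) -> value

    def put(self, world, sort, value):
        self._store[(world, sort)] = value

    def get(self, world, sort):
        return self._store.get((world, sort))

space = ScopedSpace()
space.put("operational", "track", {"id": 7})
space.put("simulation", "track", {"id": 7, "synthetic": True})

real = space.get("operational", "track")  # simulation data is invisible here
```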

32
Worlds and subsystems
(Diagram: subsystem 0 and subsystem 1 each scope their own sorts (x, y, z and v, x, z); only sort z is shared at system level.)
33
Worlds and scopes
Dynamic scopes
34
Worlds and scopes
Variable grid dynamic scopes
35
Scoping (privacy)
(Diagram: an application A holding a private scope within the dataspace.)
36
Engineering
  • Automatic selection of available resources
  • uses the standard mechanism for requesting a
    service
  • service parameters allow discrimination between
    requests
  • Once a private connection is established, it uses
    the standard mechanism
  • data sorts defined during negotiation
  • partition of the dataspace inaccessible to others
  • Connection can be terminated by either
    participating process

37
Shared access
(Diagram: multiple applications A sharing access to the dataspace.)
38
Implementation
  • Shared dataspace consists of
  • connectivity administration
  • using dynamic (lazy) binding
  • publish/subscribe-based communication mechanism
  • data management facilities
  • configurable per subscription
  • system management facilities

39
Distribution
  • Initial situation

(Diagram: initial situation: applications attached to their local heralds i and j; heralds are connected by multicast/broadcast, together forming the shared dataspace.)
40
Distribution
Declaration of intent

(Diagram: one application calls subscribe for a sort, the other calls publish; herald i announces its need for data of that sort, herald j announces the availability of data of that sort.)
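The declaration of intent can be sketched as follows: subscriptions declare a need for a sort, and subsequent writes are forwarded only to heralds with a matching subscription. Class and method names are illustrative, not the actual herald protocol.

```python
# Sketch of herald matching: subscribe declares a need for a sort; write
# forwards data only to heralds that declared interest (lazy binding).

class Herald:
    def __init__(self, name):
        self.name = name
        self.subscriptions = set()        # sorts needed locally
        self.inbox = []                   # data forwarded to this herald

class Network:
    def __init__(self):
        self.heralds = []

    def subscribe(self, herald, sort):
        herald.subscriptions.add(sort)    # broadcast: "need for data of sort"

    def write(self, sort, value):
        # Forward only to heralds that declared interest in this sort.
        for h in self.heralds:
            if sort in h.subscriptions:
                h.inbox.append((sort, value))

net = Network()
hi, hj = Herald("i"), Herald("j")
net.heralds = [hi, hj]
net.subscribe(hj, "traffic")
net.write("traffic", 42)                  # delivered to herald j only
```

Forwarding only to interested heralds is what lets reads be served locally on the next slide, with no remote access at read time.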
41
Distribution
  • Write operation

(Diagram: an application calls write for a sort and value; its herald forwards the data to the heralds that declared a need for that sort.)
42
Distribution
  • Read operation

(Diagram: a read with a query is served from the data cached by the application's local herald; no remote access is needed.)
43
Distribution
Write operation (detailed)

(Diagram: write appends a time-stamped sample to a buffer ⟨a,v0,t0⟩, ⟨a,v1,t1⟩, ..., ⟨a,vn,tn⟩; herald i forwards the buffered data using the specified QoS, for instance when a sample ages past t0, on timeout, or when n reaches the buffer size.)
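The buffered write path can be sketched as below. The flush-when-full policy (with a timeout triggering the same flush in a real system) is an illustrative reading of the slide's condition, not the exact Splice QoS rule.

```python
# Sketch of a buffered write: a herald collects time-stamped samples
# (tag, value, t) and forwards the whole batch when the buffer fills.

class BufferedWriter:
    def __init__(self, buffersize, forward):
        self.buffersize = buffersize
        self.forward = forward            # callback: send batch with given QoS
        self.buffer = []

    def write(self, tag, value, t):
        self.buffer.append((tag, value, t))
        if len(self.buffer) >= self.buffersize:
            self.flush()                  # a timeout would also call flush()

    def flush(self):
        if self.buffer:
            self.forward(self.buffer)
            self.buffer = []

sent = []
w = BufferedWriter(buffersize=3, forward=sent.append)
for i in range(3):
    w.write("a", i * 10, t=i)             # third write fills and flushes
```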
44
Distribution
  • Read operation
  • Several storage modules available
  • default: a practical compromise between speed and
    sophistication
  • queue
  • history ordered by application-defined
    time-stamp
  • single-place buffer
  • Application may specify own repository
  • Wake-up or polling choice
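The storage modules listed above can be sketched as three policies over the same incoming data: a FIFO queue, a history ordered by an application-defined time-stamp, and a single-place buffer. Module names are illustrative, not the Splice ones.

```python
# Sketch of per-subscription storage modules for the read side.

class Queue:
    def __init__(self): self.items = []
    def store(self, value, t): self.items.append(value)
    def read(self): return self.items.pop(0) if self.items else None

class History:
    def __init__(self): self.items = []
    def store(self, value, t):
        self.items.append((t, value))
        self.items.sort()                 # ordered by application time-stamp
    def read(self): return [v for _, v in self.items]

class SinglePlaceBuffer:
    def __init__(self): self.value = None
    def store(self, value, t): self.value = value   # keep the latest only
    def read(self): return self.value

q, h, s = Queue(), History(), SinglePlaceBuffer()
for t, v in [(2, "b"), (1, "a"), (3, "c")]:
    for m in (q, h, s):
        m.store(v, t)
```

Since the module is configurable per subscription, each reader picks the trade-off it needs; an application-supplied repository would just be another class with the same store/read shape.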

45
SPLICE
  • Data model
  • Sorts correspond to C structs or Ada records
  • Index may be defined
  • Sorts may be associated with multiple categories
  • Instances can be defined in different subspaces
    (worlds)

46
SPLICE
  • Subscription
  • per data sort, or
  • can be defined for a group of sorts, or
  • may be specified for sorts in a category, or
  • all sorts in a group of worlds
  • Data management controlled by application
  • Data transfer mode application defined
  • Rich query language (depending on database used)
  • Filters for data-dependent subscription

47
Multi-sort subscription
Corresponding instances are assembled based on a
common key (natural join); default values for
missing instances may be defined.
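The multi-sort assembly can be sketched as a natural join over a common key, with defaults filling in for missing instances. Sort and field names are illustrative.

```python
# Sketch of a multi-sort subscription: join instances of several sorts on a
# common key; use a per-sort default where an instance is missing.

def multi_sort_join(key, sorts, defaults):
    # sorts: sort name -> {key value: instance}
    keys = set().union(*(s.keys() for s in sorts.values()))
    joined = {}
    for k in sorted(keys):
        row = {key: k}
        for name, instances in sorts.items():
            row[name] = instances.get(k, defaults[name])
        joined[k] = row
    return joined

tracks = {1: {"pos": (0, 0)}, 2: {"pos": (5, 5)}}
labels = {1: {"id": "friend"}}            # no label yet for track 2
rows = multi_sort_join(
    "track",
    {"track_pos": tracks, "track_label": labels},
    defaults={"track_pos": None, "track_label": {"id": "unknown"}},
)
```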
48
Data categories
  • User defined
  • Typical usage
  • persistent data
  • starting a system in a predefined state
  • process-state data (context)
  • starting a process in a predefined state
  • persistent data
  • fault-tolerance
  • late starters

49
Subscription to category
(Diagram: a subscription to all sorts in category X matches each individual sort in the category.)
50
Subscription to category
(Diagram: the subscription to all sorts in category X expands into subscriptions to the individual sorts.)
Treated as individual subscriptions; the sort
specification is made available to the application.
This mechanism is used for persistence.
51
Standard services
  • Sort name translation
  • World name translation
  • Persistent data management
  • Process restart
  • Hot stand-by

52
Introspection
  • Applications can subscribe to system data
  • hosts (nodes, machines)
  • processes
  • subscriptions
  • publications
  • data sorts
  • defined worlds
  • ...

53
  • Built-in system management functions
  • process state (active, standby, etc)
  • automatic activation of backup processes
  • hot standby within msecs
  • uses O.S. for starting new program instance
  • dynamic reconfiguration
  • health monitoring
  • problem reporting
  • Monitoring tools
  • Heterogeneous systems (byte swaps)

54
Predictability
(Diagram: system bounds: < n tracks, < n LDF nodes, < n threats.)
55
(Diagram: sensor processes S feed LDF nodes, which feed DF; the configuration is bounded by < n tracks, < n LDF nodes, and < n threats.)
56
Development support
  • Monitoring tools
  • Visualization of running applications
  • Actual connections between applications
  • Application data
  • Generic read from SDS
  • Inspection of internal Splice state

57
Development support
  • Control tool
  • Starting/stopping Splice on (remote) machines
  • Starting/stopping applications (remotely)
  • ASC uses operating system commands
  • Note that ASC is (often) practical to have, but
    is never necessary

58
Development support
  • Preprocessor for data definitions
  • Automatic monitoring code generation from data
    definitions
  • API call tracing
  • Many monitoring functions support development

59
Development support
  • Language bindings
  • C, (C++)
  • Haskell, Clean
  • Perl
  • (Java)
  • Ada

60
Programming Models
  • Multiparadigm software architecture
  • SPLICE is well-suited for multiparadigm
    integration
  • only data-coupling (no control coupling)
  • shared dataspace provides paradigm-specific
    interface

(Diagram: paradigm-specific interfaces to the shared dataspace, e.g. CLP, Ada, ...)
61
Effect of shared data architecture
(Diagram: cost and functionality plotted over time.)
62
Objects vs Heralds
The Distributed Herald Broker (DHB) is 50 times
faster than Object Request Brokers (ORBs). The DHB
reduces and balances network load compared to ORBs.
The OMG is adopting the subscription semantics of
the DHB in its new Notification Service, but not
the DH implementation.
63
History
  • Development started 25 years ago
  • Used in numerous naval systems
  • 2nd generation now operational
  • Theoretical foundation building
  • Protected by patent (US 5,301,339, granted
    5 April 1994, filed 16 December 1986; Europe, ...)

64
Related approaches
  • Shared dataspace-like programming models
  • Linda (Yale U)
  • ADS (Hitachi)
  • Java Spaces (Sun)
  • T Space (IBM)
  • eSpeak (HP)
  • NDDS (RTI)
  • Splice (Signaal)

65
Splice
  • Technologies used in Splice finally becoming
    mainstream
  • peer-to-peer communication
  • push technology
  • process autonomy
  • data caching
  • Concept has not changed in the last 25 years and
    needs no change ...
  • language independent (imperative: C, Ada, Java,
    C++, ...; functional; logic; ...)
  • HW and OS independent

66
Developments
  • Mathematical foundation
  • formal semantics
  • process algebra
  • allow reasoning about program behaviour
  • fully transparent process replication
  • Basis for numerous research development
    projects
  • distributed decision making
  • engineering support (methods, tools)
  • implementation of monotonic dataspace
  • descriptive metadata (improved support for
    interoperability)