1
Results on Data Delivery (WP3)
DBGlobe IST-2001-32645
1st Review Paphos, January 31, 2003
Proactive initiative on Global Computing (GC)
Future and Emerging Technologies (FET)
The roots of innovation
2
WP3 Outline
  • Data Delivery/Coordination
  • Task 3.1 Data delivery among the system
    components. Derive adaptive data delivery
    mechanisms considering various modes of delivery
    such as
  • push (transmission of data without an explicit
    request) and pull,
  • periodic and aperiodic,
  • multicast and unicast delivery.
  • Task 3.2 Model the coordination of the mobile
    entities using workflow management (and
    transactional workflows) and techniques used in
    the multi-agent community.

DBGlobe, 1st Annual Review
Paphos, Jan 2003
3
Timeline
[Gantt chart: WP3 tasks 3.1 Data Delivery, 3.2
Coordination, 3.3 Performance, plotted over
project months 3-24 (Years 1 and 2)]
Deliverables
D8 Data Delivery Mechanisms (Oct 2002)
D9 Modeling Coordination Through Workflows (April 2003)
D10 Data Delivery and Querying (August 2003)
4
Outcomes of WP3 so far
D8 Data Delivery Mechanisms: a taxonomy of
mechanisms and an outline of potential use within
the DBGlobe architecture
  • A number of specific results in data delivery in
    Global Computing
  • Coherent Push-based Data Delivery
  • Adaptive Multi-version Broadcast Data Delivery
  • Efficient Publish-Subscribe Data Delivery

5
In this presentation
Just a note on the different modes, then a summary
of technical results:
1. Coherent Data Delivery
2. Adaptive Multi-version Broadcast Data Delivery
3. Efficient Publish-Subscribe Data Delivery
6
D8 Taxonomy of Different Modes of Data Delivery
Data Delivery Modes in Global Computing
Client Pull vs. Server Push
pull-based: transfer of information is initiated
by the client.
push-based: server-initiated; servers send
information to clients without any specific
request. Push is scalable, but clients may
receive irrelevant data.
Hybrid schemes: hot data are pushed and cold data
are pulled.
Aperiodic vs. Periodic
aperiodic delivery: usually event-driven; a data
request (for pull) or transmission (for push) is
triggered by an event (i.e., a user action for
pull or a data update for push).
periodic delivery: performed according to some
pre-arranged schedule.
7
D8 Taxonomy of Different Modes of Data Delivery
Unicast vs. 1-N
Unicast: from a data source (server) to the client.
1-to-N: data sent is received by multiple clients;
multicast and broadcast.
Data vs. Query Shipping
Based on the unit of interaction between clients
and data sources; depends on whether the data
sources have query processing capabilities.
Query shipping may reduce the communication load,
since only relevant data sets are delivered to
the client.
8
D8 Taxonomy of Different Modes of Data Delivery
[Figure: delivery mechanisms placed along three
axes (push vs. pull, periodic vs. aperiodic,
unicast vs. 1-N/broadcast):
Publish/subscribe (aperiodic, 1-N, push)
Email list (aperiodic, unicast, push)
Polling (periodic, unicast, pull)
Request/Response (aperiodic, unicast, pull)]
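The figure's classification can be written down as a small lookup table. A minimal sketch in Python; the mechanism names and axis values come from the slide, while the function name and table layout are illustrative:

```python
# Classify common delivery mechanisms along the three taxonomy axes
# from D8: timing, addressing, and initiation (values from the slide).
MECHANISMS = {
    "publish/subscribe": ("aperiodic", "1-N", "push"),
    "email list":        ("aperiodic", "unicast", "push"),
    "polling":           ("periodic", "unicast", "pull"),
    "request/response":  ("aperiodic", "unicast", "pull"),
}

def mechanisms_matching(timing=None, addressing=None, initiation=None):
    """Return mechanism names whose mode matches every given axis value."""
    wanted = (timing, addressing, initiation)
    return [name for name, mode in MECHANISMS.items()
            if all(w is None or w == m for w, m in zip(wanted, mode))]

print(mechanisms_matching(initiation="push"))
# ['publish/subscribe', 'email list']
```

Leaving an axis as None matches any value on that axis, so the same table answers queries along any combination of the three dimensions.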
9
Outline
A note on the different modes, then a summary of
technical results:
1. Coherent Push-based Data Delivery
2. Adaptive Multi-version Broadcast Data Delivery
3. Efficient Publish-Subscribe Data Delivery
10
Coherent Data Delivery
The Data Broadcast Push Model
  • The server broadcasts data from a database to a
    large number of clients
  • push mode: no direct communication with the
    server (stateless server, e.g., sensors)
  • client-side protocols
  • Data updates at the server
  • Periodic updates for the values on the channel

[Figure: server pushing data to clients over the
broadcast channel]
  • Efficient way to disseminate information to
    large client populations with similar interests
  • Physical support in wireless networks
    (satellite, cellular)
  • Various other applications, sensor networks,
    data streams

11
Coherent Data Delivery
Our Goal
Ensure that clients receive temporally coherent
(e.g., current) and semantically coherent
(transaction-wise) data
  • Provide a model for temporal and semantic
    coherency
  • Show what type of coherency we get if there are
    no additional protocols
  • Show what type of coherency is achieved by a
    number of protocols proposed in the literature
    (and their extensions)

12
Coherent Data Delivery
Currency properties of the readset (the set of
items read and their values) are based on the
currency of the items in the readset.
(Currency Interval of an Item) CI(x, R) = [cb, ce):
the currency interval of x in the readset of R,
where cb is the time instance the value of x read
by R was stored in the database and ce is the time
instance of the next change of this value in the
database. If the value read by R has not been
changed subsequently, ce is infinity.
  • Based on CI(x, R), two types of currency of the
    readset of a transaction R
  • Overlapping
  • Oldest-value

13
Coherent Data Delivery
(Overlapping Currency) R is overlapping current if
the intersection of CI(x, R) over all (x, u) in
RS(R) is non-empty, say [cb, ce). With overlapping
currency, Overlap(R) = ce- (if ce is not
infinity), current_time (otherwise).
Intuition: there is an interval of time that is
included in the currency interval of all items in
R's readset.
In general, the oldest-value currency of a
transaction R, denoted OV(R), is ce-, where ce
is the smallest among the endpoints of the
CI(x, R), over every x with (x, u) in RS(R).
If R is overlapping current, Overlap(R) = OV(R).
14
Coherent Data Delivery
If not overlapping, we want to measure the
discrepancy among the database states seen by a
transaction: the temporal spread.
(Temporal Spread of a Readset) Let min_ce be the
smallest among the endpoints and max_cb the
largest among the begin-points of the CI(x, R)
for x in the readset of a transaction R. Then
temporal_spread(R) = max_cb - min_ce, if max_cb
> min_ce; 0 otherwise.
For an overlapping current transaction, the
temporal spread is zero!
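The two definitions (overlapping currency and temporal spread) reduce to min/max computations over the currency intervals. A minimal sketch, assuming integer time and using hypothetical interval values:

```python
def coherency(readset_intervals):
    """readset_intervals: list of (cb, ce) currency intervals [cb, ce).
    Returns (overlapping, min_ce, temporal_spread) following the
    slide's definitions; OV(R) is the instant just before min_ce."""
    min_ce = min(ce for _, ce in readset_intervals)
    max_cb = max(cb for cb, _ in readset_intervals)
    overlapping = max_cb < min_ce            # common sub-interval exists
    spread = max_cb - min_ce if max_cb > min_ce else 0
    return overlapping, min_ce, spread

# Hypothetical readset: all four intervals share a common window
ivals = [(2, 10), (4, 12), (6, 9), (3, 14)]
print(coherency(ivals))  # (True, 9, 0)
```

With half-open intervals, the readset is overlapping current exactly when the latest begin-point still precedes the earliest endpoint; otherwise the gap between them is the temporal spread.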
15
Coherent Data Delivery
Example: R1 reads x1, x2, x3, x4
[Figure: the currency intervals CI(x1, R1) ...
CI(x4, R1) plotted on a time axis from 2 to 20]
Overlapping current with Overlap(R) = 8 and
temporal_spread(R) = 0
16
Coherent Data Delivery
Example: R1 reads x1, x2, x3, x4
[Figure: CI(x1, R1) ... CI(x4, R1) on a time axis
from 2 to 20, marking the oldest value read
(min_ce) and max_cb (most current)]
Not overlapping, but OV(R) = 8 and
temporal_spread(R) = 9 - 8 = 1
17
Coherent Data Delivery
Example: R1 reads x1, x2, x3, x4
[Figure: CI(x1, R1) ... CI(x4, R1) on a time axis
from 2 to 20, marking the oldest value read
(min_ce) and max_cb (most current)]
Not overlapping, but OV(R) = 8 and
temporal_spread(R) = 9
18
Coherent Data Delivery
Besides discrepancy, currency (how old are the
values seen):
(Transaction-Relative Currency) R is relative
overlapping current with respect to time instance
t, if t is in CI(x, R) for every x read by R. R is
relative oldest-value current with respect to
time instance t, if t <= OV(R).
(Temporal Lag) Let tc be the largest t <=
tcommit_R with respect to which R is relative
(overlapping or oldest-value) current; then
temporal_lag(R) = tcommit_R - tc.
The smaller the temporal lag and the temporal
spread, the higher the temporal coherency of a
read transaction.
Best temporal coherency: overlapping relative
current with respect to tcommit_R (both the
temporal lag and the temporal spread are zero).
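Assuming integer time instants, the temporal lag of an overlapping-current transaction can be sketched as follows; the interval values are hypothetical, and tc is taken as the latest instant no later than tcommit_R that lies in every CI:

```python
def temporal_lag(readset_intervals, t_commit):
    """Temporal lag for an overlapping-current transaction, integer time.
    For half-open intervals [cb, ce), the latest instant contained in
    every CI(x, R) is min_ce - 1, so tc = min(t_commit, min_ce - 1)."""
    min_ce = min(ce for _, ce in readset_intervals)
    max_cb = max(cb for cb, _ in readset_intervals)
    assert max_cb < min_ce, "transaction must be overlapping current"
    tc = min(t_commit, min_ce - 1)
    return t_commit - tc

# Hypothetical intervals: Overlap(R) = 8 (min_ce = 9), commit at 12
print(temporal_lag([(2, 9), (4, 10), (6, 11)], t_commit=12))  # 4
```

Committing inside the common window gives lag 0; committing later grows the lag by exactly the distance from the end of the window, matching the slide's 12 - 8 = 4 style of example.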
19
Coherent Data Delivery
Example: R1
[Figure: CI(x1, R1) ... CI(x4, R1) on a time axis
from 2 to 20]
Overlapping current with Overlap(R) = 8,
temporal_spread(R) = 0, temporal_lag(R) = 0
20
Coherent Data Delivery
Example: R1
[Figure: CI(x1, R1) ... CI(x4, R1) on a time axis
from 2 to 20]
Overlapping current with Overlap(R) = 8,
temporal_spread(R) = 0, temporal_lag(R) = 12 - 8 = 4
21
Coherent Data Delivery
Example: R1
[Figure: CI(x1, R1) ... CI(x4, R1) on a time axis
from 2 to 20]
Overlapping current with Overlap(R) = 8,
temporal_spread(R) = 0, temporal_lag(R) = 19 - 8 = 11
22
Coherent Data Delivery
  • What is the coherency of R (temporal lag and
    spread) if R just reads items from the broadcast?
  • Let tlastread_R be the time instance R performs
    its last read.
  • temporal_lag(R) = tcommit_R - begin_cycle(tbegin_R)
    and temporal_spread(R) = tlastread_R -
    begin_cycle(tbegin_R)
  • (tight bounds) There are cases where we get the
    worst lag and spread
  • If pu = 0 (immediate updates), we get the best
    (worst) lag and spread
  • If all items are read from the same cycle, the
    spread is 0 and the lag is pu

23
Coherent Data Delivery
Basic Techniques
  • Protocols fall into two broad categories
  • invalidation (which corresponds to broadcasting
    the endpoints (ce's) of the currency interval for
    each item)
  • Periodically broadcast an invalidation report
    (IR): a list of the items that have been updated
    since the broadcast of the previous IR
  • versioning (which corresponds to broadcasting
    the begin points (cb's) of the currency interval
    for each item)
  • With each item, broadcast a timestamp (version)
    of when it was created

And a hybrid protocol that combines versioning
and invalidation
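The client side of the invalidation protocol amounts to filtering the cache against each invalidation report. A minimal sketch; the item names and the dict-based cache are illustrative:

```python
def still_current(cached_items, invalidation_report):
    """Client-side check against a periodic invalidation report (IR):
    drop any cached item that the IR lists as updated since the
    previous report, keeping only values known to be current."""
    updated = set(invalidation_report)
    return {x: v for x, v in cached_items.items() if x not in updated}

cache = {"x1": 5, "x2": 9, "x3": 4}
print(still_current(cache, invalidation_report=["x2"]))
# {'x1': 5, 'x3': 4}
```

A versioning protocol would instead compare the timestamp broadcast with each item against the timestamp of the cached copy; the IR check above needs no per-item timestamps, only the periodic list.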
24
Coherent Data Delivery
Definitions of Semantic Coherency
(Consistency levels C0-C4)
C1: RS(R) is a subset of DS (a subset of a
consistent database state)
C2: R serializable with the set of server
transactions that wrote values read (directly or
indirectly) by R
C3: R serializable with all server transactions
C4: R serializable with all server transactions,
and the serializability order of the server
transactions that R observes is consistent with
the commit order of transactions at the server
Rigorous schedules: commit order compatible with
the serialization order
25
Coherent Data Delivery
Reading from a single cycle
If transaction R reads all items from the same
cycle, it is C1 but not necessarily C2
If the server schedule is rigorous and R reads
all items from the same cycle, it is C4
26
Coherent Data Delivery
Read Test Theorem
It suffices to check for violation of C2, C3, and
C4 by a client transaction R when R reads a data
item if and only if the server schedule is
rigorous.
Protocols
27
Coherency in Broadcast-Based Dissemination
Future Work
  • Multiple Servers What is the semantic and
    temporal coherency the client gets
  • Performance Evaluation of the various types of
    coherency

Reference
E. Pitoura, P. K. Chrysanthis and K. Ramamritham.
Characterizing the Temporal and Semantic
Coherency of Broadcast-based Data Dissemination.
Proc. of the 9th International Conference on
Database Theory (ICDT03), January 2003, Siena,
Italy.
28
Outline
A note on the different modes, then a summary of
technical results:
1. Coherent Push-based Data Delivery
2. Adaptive Multi-version Broadcast Data Delivery
3. Efficient Publish-Subscribe Data Delivery
29
Multi-version Broadcast
Similar model, BUT: the server (data source) at
each cycle sends not just one value per item but
multiple versions per item. This enhances
concurrency (similar to multi-version schemes in
traditional client-server systems), gives access
to multiple server states and to data series when
there is not enough memory, and lets multiple data
servers share the channel (multi-sensor networks).
30
Multi-version Broadcast
  • Develop multiple-version models to enhance
    concurrency
  • Keep the overhead low and increase the number
    of consistent client transactions
  • With 10% of items updated per cycle,
    broadcasting 2 versions (a 20% increase) reduces
    the abort rate from above 60% to below 20%
  • Increase tolerance to disconnections
  • From 25% to 90%, depending on the frequency and
    the duration of the disconnection

31
Multi-version Broadcast
Issues
How should the broadcast be organized? (select
the order in which items are broadcast)
What are appropriate client-cache protocols?
32
Multi-version Broadcast
Basic Organization
Vertical (per version): broadcast all items with a
specific version number
Horizontal (per item): broadcast all versions of
each item
33
Multi-version Broadcast
Compression
        V0 V1 V2 V3
Data 0   2  2  2  2
Data 1   5  9  9  9
Data 2   4  4  4  4
Data 3   3  3  7  7
  • Actual encoding requires auxiliary symbols!
  • Longer sequences of repetitive data first!

V030224133015V1219V2137
Extensions for broadcast disk organizations
(broadcast disk: the frequency of broadcasting an
item depends on its popularity)
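The run-length compression used on the slides (e.g., a row 2 2 2 2 becoming 2x3) can be sketched as below; the "value x extra-repeats" reading of the notation is an assumption inferred from the slide's examples:

```python
from itertools import groupby

def compress(values):
    """Run-length encode one row (or column) of the version table:
    a run of length n > 1 becomes 'v' + 'x' + (n - 1) extra repeats,
    a run of length 1 stays as just 'v'."""
    parts = []
    for v, run in groupby(values):       # groups consecutive equal values
        n = len(list(run))
        parts.append(f"{v}x{n - 1}" if n > 1 else str(v))
    return " ".join(parts)

print(compress([2, 2, 2, 2]))  # 2x3
print(compress([5, 9, 9, 9]))  # 5 9x2
print(compress([3, 3, 7, 7]))  # 3x1 7x1
```

Applying this per item gives the horizontal compression and per version the vertical one; longer runs compress better, which is why the slide orders longer sequences of repetitive data first.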
34
Multi-version Broadcast
Adaptability: performance depends on client access
patterns
Client Access
Vertical (or snapshot): a client accesses items of
a specific version
Horizontal (or historical): a client accesses all
versions of a specific item
Random
35
Multi-version Broadcast
  • Client Cache
  • Extend LRU with three criteria:
  • Timestamp (classic LRU)
  • Version number (replace the oldest item)
  • F, the compression rate (replace the least
    frequently updated item)
  • Replacement based on a weighted sum of the above
  • Autoprefetch to update F
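The weighted-sum replacement policy can be sketched as a scoring function; the weights, the entry layout (timestamp, version, F), and the "lowest score is evicted" rule are illustrative assumptions, not the evaluated policy's exact parameters:

```python
def victim(cache_entries, w_time=1.0, w_version=1.0, w_freq=1.0):
    """Pick the cache entry to evict using a weighted sum of the three
    criteria above: LRU timestamp, version number, and update rate F."""
    def score(entry):
        _, (last_used, version, freq) = entry
        # lower score = better eviction candidate:
        # old, stale version, rarely updated
        return w_time * last_used + w_version * version + w_freq * freq
    return min(cache_entries.items(), key=score)[0]

# name -> (last-used time, version number, update frequency F)
cache = {"x1": (10, 3, 0.9), "x2": (2, 1, 0.1), "x3": (7, 2, 0.5)}
print(victim(cache))  # x2
```

Setting all but one weight to zero recovers the individual policies (pure LRU, oldest-version, least-frequently-updated), which is what makes the weighted sum adaptive.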

36
Multi-version Broadcast
Results: better performance when the broadcast
organization follows the client access type. The
client cache improves access for clients with
different access types.
37
Multi-version Broadcast
References
E. Pitoura and P. K. Chrysanthis. Multiversion
Data Broadcast. IEEE Transactions on Computers
51(10):1224-1230, October 2002.
O. Shigiltchoff, P. K. Chrysanthis and E. Pitoura.
Multi-version Data Broadcast Organizations. In
Proc. of the 6th East European Conference on
Advances in Databases and Information Systems
(ADBIS), September 2002, Bratislava, Slovakia.
O. Shigiltchoff, P. K. Chrysanthis and E. Pitoura.
Adaptive Multi-version Data Broadcast
Organizations. In preparation for journal
publication.
38
Outline
A note on the different modes, then a summary of
technical results:
1. Coherent Push-based Data Delivery
2. Adaptive Multi-version Broadcast Data Delivery
3. Efficient Publish-Subscribe Data Delivery
39
Framework
Publish/subscribe
  • Personalized Information Delivery: Pub/Sub
    Systems
  • Subscriptions: descriptions of interests/services
  • Events: service invocations
  • Match events to all subscriptions impacted

40
Basic Model
Publish/subscribe
  • One or more Event Sources (ES) / Producers
  • produce events in response to changes in a
    real-world variable that they monitor.

An Event Brokering System (EBS): consists of one
or more brokers. Events are published to the EBS,
which matches them against a set of subscriptions
submitted by users in the system.
One or more Event Displayers (ED) / Consumers: if
a user's subscription matches the event, the event
is forwarded to the Event Displayer for that user.
The ED alerts the user.
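The ES to EBS to ED pipeline can be sketched as a single matching broker; the class and method names are illustrative, not DBGlobe APIs:

```python
class Broker:
    """A one-broker Event Brokering System: stores subscriptions and
    matches each published event against all of them."""
    def __init__(self):
        self.subscriptions = []          # (user, predicate) pairs

    def subscribe(self, user, predicate):
        self.subscriptions.append((user, predicate))

    def publish(self, event):
        """Match an event against every subscription and return the
        users whose Event Displayers should be notified."""
        return [user for user, pred in self.subscriptions if pred(event)]

ebs = Broker()
ebs.subscribe("alice", lambda e: e["temp"] > 30)
ebs.subscribe("bob",   lambda e: e["temp"] < 0)
print(ebs.publish({"temp": 35}))  # ['alice']
```

The scalability problems the next slides address arise precisely because a real EBS has many brokers, so subscriptions and events must be propagated between them rather than matched at a single node.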
41
The Problem
Publish/subscribe
  • Key problems when designing and implementing
    large-scale publish/subscribe systems
  • efficient propagation of subscriptions and
  • distributed event processing
  • among the brokers of the system

42
Motivation and Contributions
Publish/subscribe
  • Key idea
  • Summarize subscriptions and
  • process summaries as opposed to subscriptions.

43
Publish/subscribe
  • Detailed Contributions
  • 1. A mechanism for compacting subscription
    information (per-broker subscription summary).
  • Supports event/subscription schemata that are
    rich with respect to the attribute types and
    powerful with respect to the operators for these
    attributes.
  • 2. A protocol for efficient distribution of
    subscription summaries to brokers and for merging
    them at each hop (multi-broker summaries).
  • 3. A protocol for the efficient distributed
    processing of events: events traverse the
    network, content-routed based on the
    multi-broker summaries, as they are matched to
    subscriptions from all brokers.
  • 4. An event-matching algorithm for the
    subscription summaries.
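The idea behind subscription summaries can be illustrated with a deliberately lossy per-attribute merge; this is a simplified sketch of the concept, not the paper's actual summary data structure:

```python
import math

def summarize(subscriptions):
    """Merge per-attribute range constraints into one covering interval
    per attribute. The summary is smaller than the subscription set and
    may over-match (subsumption), but never misses a real match."""
    summary = {}
    for sub in subscriptions:
        for attr, (lo, hi) in sub.items():
            cur_lo, cur_hi = summary.get(attr, (math.inf, -math.inf))
            summary[attr] = (min(cur_lo, lo), max(cur_hi, hi))
    return summary

subs = [{"temp": (30, 40)}, {"temp": (10, 15), "hum": (0, 50)}]
print(summarize(subs))  # {'temp': (10, 40), 'hum': (0, 50)}
```

A broker forwarding this summary instead of both subscriptions sends less data, at the cost of occasionally routing an event (e.g., temp = 20) that no individual subscription wanted; that trade-off is what the savings versus Siena quantify.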

44
Conclusions
Publish/subscribe
  • Great savings for both subscription propagation
    and distributed event processing!!
  • Specifically:
  • net bandwidth: several times better than Siena
    and up to orders of magnitude better than the
    baseline approach
  • hop counts for subscription propagation: several
    times better than Siena
  • hop counts for event processing: better than
    Siena for up to 90% probability of subscription
    subsumption
  • storage requirements: several times better than
    Siena

45
Publish/subscribe
References
P. Triantafillou and A. Economides. Subscription
Summaries for Scalability and Efficiency in
Publish/Subscribe Systems. 1st Intl. IEEE Workshop
on Distributed Event-based Systems (DEBS'02),
July 2002.
P. Triantafillou and A. Economides. Efficient
Distributed Event Processing using Subscription
Summaries in Large Scale Publish/Subscribe
Systems. Submitted for publication.
46
NEXT YEAR
COORDINATION TASK; INTEGRATION OF DATA DELIVERY
AND QUERYING
47
Results on Data Delivery (WP3)
DBGlobe IST-2001-32645
1st Review Paphos, January 31, 2003
Proactive initiative on Global Computing (GC)
Future and Emerging Technologies (FET)
The roots of innovation
48
Extra Slides
49
Extra Slides (coherency)
50
Coherent Data Delivery
The Model
  • The server repetitively pushes data from a
    database to a large number of clients
  • sequential client access
  • asymmetry
  • large number of clients
  • transmission capabilities
  • Client-side protocols
  • The server is stateless
  • Data updates at the server

[Figure: server broadcasting data to clients over
the broadcast channel]
51
Coherency in Broadcast-Based Dissemination
Updates
Data are updated at the server. What is the value
broadcast at time instance t? We assume periodic
updates with an update frequency, or period, pu,
meaning that the value placed on the channel at
time t is the value of the item at the beginning
of the update period, denoted begin_cycle(t). For
periodic broadcast, pu is usually equal to the
broadcast period.
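With periodic updates of period pu anchored at time 0 (an assumption; the slide does not fix the anchor), begin_cycle(t) is just a floor to the period boundary:

```python
def begin_cycle(t, pu):
    """Start of the update period containing time instance t,
    assuming periodic updates of period pu anchored at time 0."""
    return (t // pu) * pu

print(begin_cycle(17, pu=5))  # 15
```

Every value broadcast during [15, 20) with pu = 5 therefore reflects the database state as of time 15, which is exactly the quantity the temporal-lag bounds on the earlier slide are expressed in.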
52
Coherent Data Delivery
Preliminary Definitions
Database state: a set of (data item, value) pairs.
Readset of a transaction R, RS(R): the set of
(data item, value) pairs that R read.
BSc: the content of the broadcast at the cycle
that starts at time instance c (again a set of
(data item, value) pairs).
R may read items from different broadcast cycles,
thus items in RS(R) may correspond to different
database states.
53
Coherent Data Delivery
(Currency Interval of an Item) CI(x, R) = [cb, ce):
the currency interval of x in the readset of R,
where cb is the commit time of the transaction
that wrote the value of x read by R, and ce is
the commit time of the transaction that updated x
immediately after (or infinity).
54
Semantic Coherency Model
Variations: instead of a single client
transaction, a set S of client transactions.
Example: C3-site: all transactions of a client
serializable with all server transactions; C3-all
55
Relating Semantic and Temporal Coherency
  • Assumptions
  • Server schedules are serializable
  • Broadcast only committed values

If R is overlapping current, then it is C1
consistent
56
Relating Semantic and Temporal Coherency
(Currency Interval of an Item) CI(x, R) = [cb, ce):
the currency interval of x in the readset of R,
where cb is the commit time of the transaction
that wrote the value of x read by R, and ce is
the commit time of the transaction that updated x
immediately after (or infinity).
  • Note
  • overlapping currency is similar to vintage
    transactions (ce-vintage); server schedules are
    serializable
  • semantic currency is similar to t-bound: if
    OV(R) = to, then to-bound

57
Coherency in Broadcast-Based Dissemination
Previous Work
  • cache consistency
  • (e.g., [Barbara & Imielinski, SIGMOD95],
    [Acharya et al., VLDB96])
  • Datacycle [Bowen et al., CACM92]: hardware for
    detecting changes
  • Extended for multiple servers [Banerjee & Li,
    JCI94]
  • Certification reports [Barbara, ICDCS97]
  • F-Matrix for update (C2) consistency
    [Shanmugasundaram et al., SIGMOD99]
  • SGT graph (for serializability) [Pitoura,
    ER-Workshop98], [Pitoura, DEXA-Workshop98],
    [Pitoura & Chrysanthis, ICDCS99]
  • Multiple versions [Pitoura & Chrysanthis,
    VLDB99], [Pitoura & Chrysanthis, IEEE TOC 2003]

58
Basic Organization
Multi-version Broadcast
[Figure: versions x data items laid out across
Broadcasts 1, 2, and 3: the vertical organization]
59
Basic Organization
Multi-version Broadcast
[Figure: versions x data items laid out across
Broadcasts 1, 2, and 3: the horizontal
organization]
60
Broadcast format
Multi-version Broadcast
Sequential: data element 1 1 1 3 5 7 7 4 4 4 is
broadcast as 1 1 1 3 5 7 7 4 4 4
Compressed: the same data element is broadcast as
1x2 3 5 7x1 4x2
61
V0-V3: versions; Data 0 - Data 3: data items
Multi-version Broadcast
Compression
        V0 V1 V2 V3    compressed
Data 0   2  2  2  2    2x3  -   -  -
Data 1   5  9  9  9    5   9x2  -  -
Data 2   4  4  4  4    4x3  -   -  -
Data 3   3  3  7  7    3x1  -  7x1 -

Horizontal compression: 2x3 5 9x2 4x3 3x1 7x1
Vertical compression: 2x3 5 4x3 3x1 9x2 7x1
62
Multi-version Broadcast
Compression
        V0 V1 V2 V3
Data 0   2  2  2  2
Data 1   5  9  9  9
Data 2   4  4  4  4
Data 3   3  3  7  7
  • Actual encoding requires auxiliary symbols!
  • Longer sequences of repetitive data first!

The encoding is built up step by step:
V030224
V030224133
V030224133015
V030224133015V1219
74
Event Subscription Types
Publish/subscribe
  • Event Schema
  • an untyped set of typed attributes.
  • Each attribute consists of a type, a name and a
    value.
  • The type of an attribute belongs to a predefined
    set of primitive data types commonly found in
    most programming languages.
  • The attribute's name is a simple string, while
    the value can be in any range defined by the
    corresponding type.
  • The whole structure of (type, name, value) for
    all attributes constitutes the event itself.

75
Event Subscription Types (contd)
Publish/subscribe
  • Subscription Schema
  • Can contain all attribute data types.
  • Supports all interesting operators (=, <, >,
    ranges, prefix, suffix, containment, etc.).
  • An attribute can have more than one constraint in
    the same subscription.

Event Matching Constraint: there is a match if and
only if all the subscription's attribute
constraints are satisfied.
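The Event Matching Constraint translates directly into a conjunction over the subscription's attribute constraints. A sketch; representing constraints as per-attribute predicates and failing the match on a missing attribute are assumptions:

```python
def matches(event, subscription):
    """An event matches iff every attribute constraint in the
    subscription is satisfied (the Event Matching Constraint).
    subscription maps attribute names to predicate functions."""
    return all(name in event and constraint(event[name])
               for name, constraint in subscription.items())

sub = {"price": lambda v: 10 <= v <= 20, "symbol": lambda v: v == "ACME"}
print(matches({"price": 15, "symbol": "ACME"}, sub))  # True
print(matches({"price": 25, "symbol": "ACME"}, sub))  # False
```

Multiple constraints on the same attribute can be folded into one predicate with and, which keeps the matching loop a single conjunction.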
76
Results
Publish/subscribe
  • Vs Siena and baseline approaches
  • Performance analysis on subscription propagation
  • net bandwidth savings
  • Hop counts
  • Performance analysis on distributed event
    processing
  • Hop counts
  • Storage Requirements to maintain subscription
    summaries vs subscriptions
  • Complexity analysis for event-matching algorithm

77
Performance Analysis
Publish/subscribe
  • Bandwidth requirements for subscription
    propagation
  • From 3X to 6X better than Siena
  • Orders of magnitude better than baseline
    (subscription bcast)

78
Performance Analysis (contd)
Publish/subscribe
  • Mean number of hops needed for subscription
    propagation
  • Better than Siena by 2X-15X

79
Performance Analysis (contd)
Publish/subscribe
  • Mean number of hops needed for event propagation
  • Better than Siena except when the probability
    of subsumption reaches 90%.

80
Performance Analysis (contd)
Publish/subscribe
  • Storage Requirements for Subscriptions
  • 5X better than Siena

85
DBGlobe IST-2001-32645