1
DDS Performance Evaluation
  • Douglas C. Schmidt
  • Ming Xiong
  • Jeff Parsons

2
Agenda
  • Motivation
  • Benchmark Targets
  • Benchmark Scenario
  • Testbed Configuration
  • Empirical Results
  • Results Analysis

3
Motivation
  • Data-Centric Publish-Subscribe (DCPS)
    • The lower-layer APIs apps can use to exchange
      topic data with other DDS-enabled apps according
      to designated QoS policies
  • Data Local Reconstruction Layer (DLRL)
    • The upper-layer APIs that define how to build a
      local object cache so apps can access topic data
      as if it were local
  • Goals of this study
    • Gain familiarity with different DDS DCPS
      implementations (DLRL implementations don't
      exist yet)
    • Understand the performance differences between
      DDS & other middleware
    • Understand the performance differences between
      various DDS implementations

4
Benchmark Targets
5
Benchmark Targets (cont'd)
6
Benchmark Scenario
  • Two processes perform IPC: (1) a client initiates
    a request to transmit a number of bytes to the
    server along with a seq_num (PubMessage); (2)
    the server simply replies with the same seq_num
    (AckMessage)
  • The invocation is essentially a two-way call,
    i.e., the publisher waits for the request to be
    completed (a timing sketch follows the diagram
    below)
  • Publisher & subscriber are on the same machine,
    in different processes
  • The pub/sub model is used for all tests
    • DDS & JMS provide a topic-based pub/sub model;
      the CORBA Notification Service uses a push model
      to implement pub/sub
  • A real-time scheduling policy is used (except for
    JMS) to alleviate interference from other
    background tasks (see the sketch right after this
    list)
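The slides don't show how the real-time policy was configured; below
is a minimal sketch, assuming Linux's SCHED_FIFO policy set via the
POSIX sched_setscheduler(2) call. The priority choice and error
handling are illustrative, not the authors' actual setup.

// Minimal sketch (assumption: Linux + POSIX): put the current
// benchmark process into the SCHED_FIFO real-time class, as the
// slide describes for the non-JMS runs.
#include <sched.h>
#include <cstdio>

int main() {
  sched_param sp{};
  sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
  // Needs root (or CAP_SYS_NICE) to succeed.
  if (sched_setscheduler(0 /* this process */, SCHED_FIFO, &sp) != 0) {
    std::perror("sched_setscheduler");
    return 1;
  }
  std::printf("running under SCHED_FIFO, priority %d\n",
              sp.sched_priority);
  // ... the benchmark loop would run here ...
  return 0;
}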

[Diagram: the Publisher sends a PubMessage to the Subscriber, which
replies with an AckMessage; the round trip is what we measure]
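A sketch of what the round-trip measurement loop could look like.
The Transport interface here is a hypothetical stand-in for whichever
middleware (DDS, JMS, or the Notification Service) carries the
PubMessage/AckMessage pair; it is not a real DDS, JMS, or TAO API.

// Hypothetical sketch of the round-trip measurement loop.
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Transport {  // assumed interface, not a real middleware API
  // Send a PubMessage carrying `payload` and `seq_num`.
  virtual void publish(const std::vector<std::uint8_t>& payload,
                       std::uint32_t seq_num) = 0;
  // Block until the matching AckMessage arrives; return its seq_num.
  virtual std::uint32_t wait_for_ack() = 0;
  virtual ~Transport() = default;
};

// Run `iterations` two-way calls for one payload size and return the
// per-trip latencies in microseconds.
std::vector<double> measure(Transport& t, std::size_t payload_bytes,
                            int iterations) {
  std::vector<std::uint8_t> payload(payload_bytes, 0xAB);
  std::vector<double> latencies_us;
  latencies_us.reserve(iterations);
  for (std::uint32_t seq = 0;
       seq < static_cast<std::uint32_t>(iterations); ++seq) {
    auto start = std::chrono::steady_clock::now();
    t.publish(payload, seq);               // client -> server (PubMessage)
    std::uint32_t ack = t.wait_for_ack();  // server echoes seq_num
    auto stop = std::chrono::steady_clock::now();
    if (ack != seq) continue;              // discard mismatched replies
    latencies_us.push_back(
        std::chrono::duration<double, std::micro>(stop - start).count());
  }
  return latencies_us;
}

The returned samples would feed the summary statistics sketched after
the Empirical Results list below.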
7
Testbed Configuration
  • Hostname
    • tango.dre.vanderbilt.edu
  • OS version (uname -a)
    • Linux tango.dre.vanderbilt.edu 2.6.10 #1 SMP
      Sat Jan 1 17:55:41 CST 2005 i686 GNU/Linux
  • Debian Linux version (/etc/debian_version)
    • 3.1
  • Linux kernel version (/proc/version)
  • CPU info
    • 4 Hyper-Threading Intel Xeon MP CPUs,
      1.9 GHz, w/ 2 GB RAM

8
Empirical Results
  • Average-/worst-case round-trip latency for DDS &
    dispersion results
  • Average-/worst-case round-trip latency for the
    Notification Service & dispersion results
  • Average-/worst-case round-trip latency for JMS &
    dispersion results
  • Overall comparison results (a statistics sketch
    follows this list)
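The slides report average, worst-case, and dispersion figures but
don't define "dispersion"; assuming it denotes the sample standard
deviation of the per-trip latencies, the three statistics could be
computed as follows (a sketch, not the authors' harness):

// Sketch: summarize per-trip latencies into the three figures the
// slides report. "Dispersion" is assumed to mean the sample standard
// deviation.
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

struct LatencyStats {
  double avg_us;     // average-case round-trip latency
  double max_us;     // worst-case round-trip latency
  double stddev_us;  // dispersion (assumed: sample std deviation)
};

LatencyStats summarize(const std::vector<double>& latencies_us) {
  // Assumes at least two samples.
  const double n = static_cast<double>(latencies_us.size());
  const double avg =
      std::accumulate(latencies_us.begin(), latencies_us.end(), 0.0) / n;
  const double max =
      *std::max_element(latencies_us.begin(), latencies_us.end());
  double sq_dev = 0.0;
  for (double x : latencies_us) sq_dev += (x - avg) * (x - avg);
  return {avg, max, std::sqrt(sq_dev / (n - 1.0))};
}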

9
DDS1
10
DDS1 Average / Max
11
DDS1 Dispersion
12
DDS2
13
DDS2 Average / Max
14
DDS2 Dispersion
15
Empirical Results
  • Average-/worst-case round-trip latency for DDS &
    dispersion results
  • Average-/worst-case round-trip latency for the
    Notification Service & dispersion results
  • Average-/worst-case round-trip latency for JMS &
    dispersion results
  • Overall comparison results

16
CORBA Notification Service (TAO)
17
Notification Service Max/Avg
18
Notification Service Dispersion
19
Empirical Results
  • Average-/worst-case round-trip latency for DDS &
    dispersion results
  • Average-/worst-case round-trip latency for the
    Notification Service & dispersion results
  • Average-/worst-case round-trip latency for JMS &
    dispersion results
  • Overall comparison results

20
Sun Java Messaging Service
21
JMS Max / Avg
22
JMS Dispersion
23
Empirical Results
  • Average-/worst-case round-trip latency for DDS &
    dispersion results
  • Average-/worst-case round-trip latency for the
    Notification Service & dispersion results
  • Average-/worst-case round-trip latency for JMS &
    dispersion results
  • Overall comparison results

24
Overall Comparison: DDS/JMS/NS Average
25
Overall Comparison: DDS/JMS/NS Max
26
Overall Comparison: DDS/JMS/NS Standard Deviation
27
DDS1/DDS2 Comparison: Avg
28
DDS1/DDS2 Comparison: Max
29
DDS1/DDS2 Comparison: Dispersion
30
Results Analysis
  • The results show that DDS has significantly
    better performance than the other SOA pub/sub
    middleware we tested
  • On average, the two DDS products both perform
    well within the range of our test parameters
  • DDS1 shows better performance than DDS2 at
    smaller message sizes, whereas DDS2 performs
    better at larger message sizes
  • DDS1 has much less jitter & lower max latency
    than DDS2

31
Future Work
  • Compare the performance of sending statically
    typed data via CORBA/DDS with sending dynamically
    typed data via XML/SOAP
  • Measure the run-time performance & static/dynamic
    footprint of:
    • Web Service pub/sub mechanisms, as well as other
      Service-Oriented Architecture (SOA) middleware,
      & compare them with our current results
    • DDS on a broader/larger range of data types &
      sizes
    • Other DDS products (both C++ & Java)
    • Other DDS QoS parameters (e.g., TransportPriority
      & Reliability) & their effect on throughput,
      latency, jitter, & scalability
    • DDS DLRL implementations (when they become
      available)
  • To evaluate the scalability of DDS
    implementations, we will use one-to-many &
    many-to-many configurations in ISISlab, our new
    cluster of 56 dual-CPU nodes
  • www.dre.vanderbilt.edu/~schmidt/ISISlab.html