Transcript and Presenter's Notes

Title: The Effects of Active Queue Management on Web Performance


1
The Effects of Active Queue Management on Web
Performance
  • SIGCOMM 2003
  • Long Le, Jay Aikat, Kevin Jeffay, F. Donelson Smith

29 January 2004, presented by Sookhyun Yang
2
Contents
  • Introduction
  • Problem Statement
  • Related Work
  • Experimental Methodology
  • Results and Analysis
  • Conclusion

3
Introduction
  • Drop policy
  • Drop-tail: drop when a queue overflows
  • Active queue management (AQM): drop before a queue
    overflows
  • Active queue management (AQM)
  • Keeps the average queue size small in routers
  • RED (random early detection) algorithm
  • Most widely studied and implemented
  • Various design issues of AQM
  • How to detect congestion
  • How to control the queue size around a stable
    operating point
  • How the congestion signal is delivered to the sender
  • Implicitly, by dropping packets at the router
  • Explicitly, by signaling with explicit congestion
    notification (ECN)

4
Problem Statement
  • Goal
  • Compare the performance of control theoretic AQM
    algorithms with original randomized dropping
    paradigms
  • Considered AQM schemes
  • Control theoretic AQM algorithms
  • Proportional integrator (PI) controller
  • Random exponential marking (REM) controller
  • Original randomized dropping paradigms
  • Adaptive random early detection (ARED) controller
  • Performance metrics
  • Link utilization
  • Loss rate
  • Response time for each request/response
    transaction

5
Contents
  • Introduction
  • Problem Statement
  • Related Work
  • Experimental Methodology
  • Platform
  • Calibration
  • Procedure
  • Results and Analysis
  • AQM Experiments with Packet Drops
  • AQM Experiments with ECN
  • Discussion
  • Conclusion

6
Random Early Detection
  • Original RED
  • Measure of congestion: weighted-average queue
    size (AvgQLen)
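A minimal sketch (not from the slides) of how RED typically maintains
AvgQLen as an exponentially weighted moving average of the instantaneous
queue length; the weight w_q shown is an illustrative value, not one used
in the paper:

    def update_avg_qlen(avg_qlen, qlen, w_q=0.002):
        # EWMA of the instantaneous queue length; a larger w_q reacts
        # faster, a smaller w_q smooths out bursts more aggressively.
        return (1.0 - w_q) * avg_qlen + w_q * qlen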

7
Random Early Detection
  • Modification of the original RED
  • Gentle mode
  • Mark or drop probability increases linearly between
    maxth and 2*maxth
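A rough sketch of the gentle-RED mark/drop probability as a function of
AvgQLen; minth, maxth, and maxp here are illustrative parameters, not the
settings used in the experiments:

    def gentle_red_prob(avg_qlen, min_th=30, max_th=90, max_p=0.1):
        if avg_qlen < min_th:
            return 0.0                     # below minth: never mark/drop
        if avg_qlen < max_th:
            # original RED region: probability rises from 0 to maxp
            return max_p * (avg_qlen - min_th) / (max_th - min_th)
        if avg_qlen < 2 * max_th:
            # gentle region: probability rises from maxp to 1
            return max_p + (1.0 - max_p) * (avg_qlen - max_th) / max_th
        return 1.0                         # beyond 2*maxth: mark/drop all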

8
Random Early Detection
  • Weakness of RED
  • Does not consider the number of flows sharing a
    bottleneck link
  • In the TCP congestion control mechanism, a packet
    mark or drop reduces the offered load by a factor
    of 1 - 0.5/n (n: number of flows sharing the
    bottleneck link)
  • Self-configuring RED
  • Adjusts maxp whenever AvgQLen falls below minth or
    rises above maxth
  • ARED
  • Adaptive and gentle refinements to the original RED
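A sketch of the adaptive part of ARED: maxp is periodically nudged so that
AvgQLen stays within a target range between the thresholds. The interval
and the additive/multiplicative constants follow one common
parameterization and are assumptions here, not values from the slides:

    def adapt_max_p(avg_qlen, max_p, min_th=30, max_th=90):
        # Target range for AvgQLen, centered between the thresholds.
        target_lo = min_th + 0.4 * (max_th - min_th)
        target_hi = min_th + 0.6 * (max_th - min_th)
        if avg_qlen > target_hi and max_p <= 0.5:
            max_p = max_p + min(0.01, max_p / 4)   # additive increase
        elif avg_qlen < target_lo and max_p >= 0.01:
            max_p = max_p * 0.9                    # multiplicative decrease
        return max_p                               # invoked every ~0.5 s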
9
Control Theoretic AQM
  • Misra et al.
  • Applied control theory to develop a model for
    TCP/AQM dynamics
  • Used this model for analyzing RED
  • Limitation of RED
  • Sluggish response to changes in network traffic
  • Due to the use of a weighted-average queue length
  • PI controller (Hollot et al.)
  • Regulates the queue length to a target value called
    the queue reference (qref)
  • Uses instantaneous samples of the queue length,
    taken at a constant sampling frequency
  • Drop probability p(kT)
  • (q(kT): instantaneous sample of the queue length,
    T = 1/sampling-frequency)

p(kT) = a*(q(kT) - qref) - b*(q((k-1)T) - qref) + p((k-1)T)
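The PI update above, written out as a sketch; the gains a and b (with
a > b) and the clamping to [0, 1] use placeholder values, not the settings
from the experiments:

    def pi_update(p_prev, q_now, q_prev, q_ref, a=1.822e-5, b=1.816e-5):
        # p(kT) = a*(q(kT) - qref) - b*(q((k-1)T) - qref) + p((k-1)T),
        # evaluated once per sampling period T.
        p = a * (q_now - q_ref) - b * (q_prev - q_ref) + p_prev
        return min(1.0, max(0.0, p))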
10
Control Theoretic AQM
  • REM scheme (Athuraliya et al.)
  • Periodically updates a congestion measure called
    the price
  • Price p(t) reflects
  • Rate mismatch between the packet arrival and
    departure rates at the link
  • Queue difference between the actual queue length
    and the target value
  • Drop probability increases with the price

p(t) = max(0, p(t-1) + γ*(α*(q(t) - qref) + x(t) - c))
(c: link capacity, q(t): queue length, qref: target
queue size, x(t): packet arrival rate)
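A sketch of the REM price update above, together with the mark/drop
probability REM derives from the price; the constants γ, α, and φ are
illustrative, not the values used in the experiments:

    def rem_price_update(price, qlen, q_ref, arrival_rate, capacity,
                         gamma=0.001, alpha=0.1):
        # price(t) = max(0, price(t-1) + γ*(α*(q(t) - qref) + x(t) - c))
        mismatch = alpha * (qlen - q_ref) + arrival_rate - capacity
        return max(0.0, price + gamma * mismatch)

    def rem_mark_prob(price, phi=1.001):
        # Mark/drop probability grows with the price: 1 - φ^(-price)
        return 1.0 - phi ** (-price)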
11
Contents
  • Introduction
  • Problem Statement
  • Related Work
  • Experimental Methodology
  • Platform
  • Calibration
  • Procedure
  • Results and Analysis
  • AQM Experiments with Packet Drops
  • AQM Experiments with ECN
  • Discussion
  • Conclusion

12
Platform
  • Emulates one peering link carrying web traffic
    between sources and destinations

(Testbed diagram: ISP 1 and ISP 2 browser/server machines
connect at 100 Mbps to 3Com 10/100/1000 Ethernet switches;
the switches connect at 1 Gbps to the two ISP routers,
which are joined by a 100/1000 Mbps link. The routers run
ALTQ extensions to FreeBSD (PI, REM, ARED) on 1 GHz Pentium
machines with 1 GB of memory, a 1000-SX fiber Gigabit
Ethernet NIC, and 100 Mbps Fast Ethernet NICs. Network
monitors tap the links on each side.)
13
Monitoring Program
  • Program 1: monitors the router interface
  • Effects of the AQM algorithms
  • Logs the queue size sampled every 10 ms, along with
  • Number of entering packets
  • Number of dropped packets
  • Program 2: link-monitoring machine
  • Connected to the links between the routers
  • Hubs on the 100 Mbps segments
  • Fiber splitters on the Gigabit link
  • Collects TCP/IP headers
  • Locally-modified version of the tcpdump utility
  • Logs link utilization

14
Emulation of End-to-End Latency
  • The congestion control loop is influenced by RTT
  • Emulate different RTTs on each TCP connection
    (per-flow delay)
  • Locally-modified version of the dummynet component
    of FreeBSD
  • Adds a randomly chosen minimum delay to all
    packets from each flow
  • Minimum delay
  • Sampled from a discrete uniform distribution
    approximating Internet RTTs within the continental
    U.S. (see the sketch below)
  • RTT
  • Flow's minimum delay + additional delay (caused
    by queues at the routers or on the end systems)
  • TCP window size: 16 KB on all end systems (a
    widely used value)
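A sketch of the per-flow delay emulation described above: each flow is
assigned, once, a minimum delay drawn from a discrete uniform
distribution, and that delay is added to every packet of the flow. The
10-150 ms range is a placeholder standing in for "continental U.S. RTTs",
not the exact range used in the paper:

    import random

    flow_min_delay_ms = {}  # (src, dst, sport, dport) -> assigned delay

    def min_delay_for_flow(flow_id, low_ms=10, high_ms=150):
        # Sample a per-flow minimum delay the first time a flow is seen,
        # mimicking the modified dummynet behavior.
        if flow_id not in flow_min_delay_ms:
            flow_min_delay_ms[flow_id] = random.randint(low_ms, high_ms)
        return flow_min_delay_ms[flow_id]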

15
Web-Like Traffic Generation
  • Model of [13]
  • Based on empirical data
  • Empirical distributions describing the elements
    necessary to generate synthetic HTTP workloads
  • Browser program and server program
  • The browser program logs the response time for each
    request/response pair

16
Calibrations
  • Offered loads
  • Network traffic resulting from emulating the
    browsing behavior of a fixed-size population of
    web users
  • Three critical calibrations before experiments
  • Only one primary bottleneck: the 100 Mbps link
    between the two routers
  • Predictably controlled offered load on the
    network
  • Resulting packet arrival time-series (packet
    counts per ms) shows long-range dependent (LRD)
    behavior [14]
  • Calibration experiment
  • Configure the network connecting the routers at
    1 Gbps
  • Drop-tail queues with 2400 queue elements

17
Calibrations
One direction of the 1Gbps link
18
Calibrations
Heavy-tailed distributions for both user think
time and response size [13]
19
Procedure
  • Experimental setting
  • Offered loads generated by user populations
  • 80%, 90%, 98%, or 105% of the capacity of the
    100 Mbps link
  • Run for 120 min, over 10,000,000 request/response
    exchanges
  • Collect data during a 90-minute interval
  • Repeated three times for each AQM scheme: PI, REM,
    ARED
  • Experimental focus
  • End-to-end response time for each
    request/response pair
  • Loss rate: fraction of IP datagrams dropped at
    the link queue
  • Link utilization on the bottleneck link
  • Number of request/response exchanges completed

20
Contents
  • Introduction
  • Problem Statement
  • Related Work
  • Experimental Methodology
  • Platform
  • Calibration
  • Procedure
  • Results and Analysis
  • AQM Experiments with Packet Drops
  • AQM Experiments with ECN
  • Discussion
  • Conclusion

21
AQM Experiments with Packet Drops
  • Two target queue lengths for PI, REM, and ARED
  • Tradeoff between link utilization and queuing
    delay
  • 24 packets for minimum latency
  • 240 packets for high link utilization
  • Recommended in [1, 6, 8]
  • Maximum queue size set sufficiently large to ensure
    that tail drops do not occur
  • Baseline
  • Conventional drop-tail FIFO queues
  • Queue sizes for drop-tail
  • 24 and 240 packets, for comparison with the AQM
    schemes
  • 2400 packets, the recently favored buffering
    equivalent to 100 ms at the link's transmission
    speed (from a mailing-list discussion)
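For scale, and assuming an average packet size of roughly 520 bytes (an
assumption, not a figure from the slides): 100 Mbps x 100 ms = 10 Mbits =
1.25 MB of buffering, and 1.25 MB / 520 bytes per packet is approximately
2400 packets.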

22
Queue Size for Drop-Tail
Drop-Tail Performance
Drop-tail queue size: 240
23
Response Time at 80% Load
AQM Experiments with Packet Drops
ARED shows some degradation relative to the
results on the uncongested link at 80% load
24
Response Time at 90% Load
AQM Experiments with Packet Drops
25
Response Time at 98% Load
AQM Experiments with Packet Drops
No AQM scheme can offset the performance
degradation at 98% load
26
Response Time at 105% Load
AQM Experiments with Packet Drops
All schemes degrade uniformly from the 98% case
27
AQM Experiments with ECN
  • Explicitly signal congestion to end systems with
    an ECN bit
  • Procedure for signaling congestion with ECN (see
    the sketch below)
  • Router: marks an ECN bit in the IP header of the
    packet
  • Receiver: marks the TCP header of the next outbound
    segment (typically an ACK) destined for the sender
    of the original marked segment
  • Original sender
  • Reacts as if a single segment had been lost within
    a send window
  • Marks the next outbound segment to confirm that it
    reacted to the congestion
  • ECN has no effect on the response time of PI, REM,
    and ARED up to 80% offered load
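A minimal sketch of the sender-side reaction described above: on an ACK
carrying ECN-Echo, the sender reduces its congestion window as if a single
segment had been lost (at most once per window of data) and sets CWR on
the next outbound segment. The variable names and the halving are a
conventional illustration, not code from the paper:

    from dataclasses import dataclass

    @dataclass
    class SenderState:
        cwnd: int                      # congestion window, bytes
        ssthresh: int
        mss: int = 1460
        reduced_this_window: bool = False
        send_cwr: bool = False

    def on_ecn_echo(state: SenderState):
        # React as if one segment had been lost within the send window,
        # but at most once per window; then mark the next outbound
        # segment with CWR to confirm the reaction to the receiver.
        if not state.reduced_this_window:
            state.cwnd = max(state.cwnd // 2, state.mss)
            state.ssthresh = state.cwnd
            state.reduced_this_window = True
            state.send_cwr = True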

28
Response Time at 90% Load
AQM Experiments with ECN
Both PI and REM provide response-time performance
close to that on an uncongested link
29
Response Time at 98% Load
AQM Experiments with ECN
Degradation, but far superior to drop-tail
30
Response Time at 105% Load
AQM Experiments with ECN
REM shows the most significant improvement in
performance with ECN
ECN has very little effect on the performance of
ARED
31
Loss ratio/Completed requests/Link utilization
AQM Experiments with Packet Drops or with ECN
32
Summary
  • For 80% load
  • No AQM scheme provides better response-time
    performance than simple drop-tail FIFO queue
    management
  • This does not change when the AQM schemes use ECN
  • For 90% load or greater, without ECN
  • PI is better than drop-tail and the other AQM
    schemes
  • With ECN
  • Both PI and REM provide significant response-time
    improvements
  • ARED with the recommended parameter settings
  • Poorest response-time performance
  • Lowest link utilization
  • Not changed by ECN

33
Discussion
  • Positive impact of ECN
  • Response-time performance under PI and REM with
    ECN at loads of 90% and 98%
  • At 90% load, approximately that achieved on an
    uncongested network

34
Discussion
  • The performance gap between PI and REM with packet
    dropping was closed through the addition of ECN
  • Possible causes of the difference in performance
    between ARED and the other AQM schemes
  • PI and REM operate in byte mode by default,
    but ARED in packet mode
  • Gentle mode in ARED
  • PI and REM periodically sample the queue length
    when deciding to mark packets, but ARED uses a
    weighted average

35
Contents
  • Introduction
  • Problem Statement
  • Related Work
  • Experimental Methodology
  • Platform
  • Calibration
  • Procedure
  • Results and Analysis
  • AQM Experiments with Packet Drops
  • AQM Experiments with ECN
  • Summary
  • Conclusion

36
Conclusion
  • Unlike a similar earlier study that reached
    negative conclusions about the use of AQM, this
    work shows that AQM schemes with ECN can be
    realized in practice
  • Limitations of this paper
  • Comparison between only two classes of algorithms
  • Control theoretic principles
  • Original randomized dropping paradigm
  • Studied a link carrying only web-like traffic
  • A more realistic mix of HTTP, other TCP traffic,
    and UDP traffic was not studied