1
TCP Westwood (with Faster Recovery)
  • Claudio Casetti (casetti@polito.it)
  • Mario Gerla (gerla@cs.ucla.edu)
  • Scott Seongwook Lee (sslee@cs.ucla.edu)
  • Saverio Mascolo (mascolo@deemail.poliba.it)
  • Medy Sanadidi (medy@cs.ucla.edu)
  • Computer Science Department
  • University of California, Los Angeles, USA

2
TCP Congestion Control
  • Based on a sliding window algorithm
  • Two stages
  • Slow Start, initial probing for available
    bandwidth (exponential window increase until a
    threshold is reached)
  • Congestion Avoidance, linear window increase by
    one segment per RTT
  • Upon loss detection (coarse timeout expiration or
    duplicate ACKs) the window is reduced to 1 segment
    (TCP Tahoe); see the sketch below
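The sketch below, in plain C, illustrates these two stages and the
Tahoe reaction to loss. The struct and function names (tcp_state,
on_ack, on_loss) and the initial ssthresh of 16 segments are
illustrative assumptions, not code from the presentation or from any
real TCP stack.

```c
/* Sketch of classic TCP (Tahoe-style) window management, assuming
 * window sizes are counted in segments. Names are illustrative,
 * not taken from any real stack. */
#include <stdio.h>

struct tcp_state {
    double cwnd;      /* congestion window, in segments */
    double ssthresh;  /* slow-start threshold, in segments */
};

/* Called once per ACK that advances the window. */
static void on_ack(struct tcp_state *s)
{
    if (s->cwnd < s->ssthresh)
        s->cwnd += 1.0;            /* Slow Start: +1 segment per ACK
                                      (roughly doubles cwnd each RTT) */
    else
        s->cwnd += 1.0 / s->cwnd;  /* Congestion Avoidance:
                                      about +1 segment per RTT */
}

/* Called when a loss is detected (timeout or duplicate ACKs). */
static void on_loss(struct tcp_state *s)
{
    s->ssthresh = s->cwnd / 2.0;   /* remember half the old window */
    if (s->ssthresh < 2.0)
        s->ssthresh = 2.0;
    s->cwnd = 1.0;                 /* Tahoe: restart from one segment */
}

int main(void)
{
    struct tcp_state s = { 1.0, 16.0 };
    for (int ack = 0; ack < 40; ack++)
        on_ack(&s);
    printf("before loss: cwnd=%.1f ssthresh=%.1f\n", s.cwnd, s.ssthresh);
    on_loss(&s);
    printf("after  loss: cwnd=%.1f ssthresh=%.1f\n", s.cwnd, s.ssthresh);
    return 0;
}
```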

3
Congestion Window of a TCP Connection Over Time
4
Shortcomings of current TCP congestion control
  • After a sporadic loss, the connection needs
    several RTTs to be restored to full capacity
  • It is not possible to distinguish between packet
    loss caused by congestion (for which a window
    reduction is in order) and a packet loss caused
    by wireless interference
  • The window size selected after a loss may NOT
    reflect the actual bandwidth available to the
    connection at the bottleneck

5
New Proposal: TCP with Faster Recovery
  • Estimation of available bandwidth (BWE)
  • performed by the source
  • computed from the arrival rate of ACKs, smoothed
    through exponential averaging
  • Use BWE to set the congestion window and the Slow
    Start threshold (see the estimation sketch below)
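A minimal sketch of this source-side estimation, assuming each ACK
reports how many new bytes it acknowledges and that a simple
exponential average with gain ALPHA = 0.9 does the smoothing. The
gain and all names (bwe_state, bwe_on_ack) are assumptions for
illustration, not the exact filter used by the authors.

```c
/* Sketch of source-side bandwidth estimation (BWE) from the ACK
 * arrival rate, smoothed with an exponential average. The gain
 * ALPHA and all names are illustrative assumptions, not the exact
 * TCP Westwood filter. */
#include <stdio.h>

#define ALPHA 0.9   /* smoothing gain: higher = smoother, slower */

struct bwe_state {
    double bwe;        /* smoothed bandwidth estimate, bytes/s */
    double last_ack_t; /* arrival time of the previous ACK, seconds */
};

/* Called on every ACK: 'now' is its arrival time, 'acked_bytes'
 * is the amount of new data the ACK acknowledges. */
static void bwe_on_ack(struct bwe_state *s, double now, double acked_bytes)
{
    double dt = now - s->last_ack_t;
    if (dt > 0.0) {
        double sample = acked_bytes / dt;                  /* instantaneous rate */
        s->bwe = ALPHA * s->bwe + (1.0 - ALPHA) * sample;  /* exponential average */
    }
    s->last_ack_t = now;
}

int main(void)
{
    struct bwe_state s = { 0.0, 0.0 };
    /* ACKs for 1460-byte segments arriving every 10 ms -> ~146 KB/s */
    for (int i = 1; i <= 200; i++)
        bwe_on_ack(&s, i * 0.010, 1460.0);
    printf("BWE ~ %.0f bytes/s\n", s.bwe);
    return 0;
}
```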

6
TCP FR Algorithm Outline
  • When three duplicate ACKs are detected
  • set ssthresh = BWE × RTT (instead of ssthresh =
    cwin/2 as in Reno)
  • if (cwin > ssthresh) set cwin = ssthresh
  • When a TIMEOUT expires
  • set ssthresh = BWE × RTT (instead of ssthresh =
    cwin/2 as in Reno) and cwin = 1 (see the sketch
    below)
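The rules above can be summarized in a short C sketch in which the
bandwidth-delay product BWE × RTT is converted into segments before
it replaces ssthresh. All field and function names, and the example
values in main, are illustrative assumptions rather than the
authors' implementation.

```c
/* Sketch of the Faster Recovery rules: on loss, ssthresh is set from
 * the estimated bandwidth-delay product (BWE * RTT) instead of
 * cwin/2. Names and values are illustrative assumptions. */
#include <stdio.h>

struct fr_state {
    double cwin;      /* congestion window, in segments */
    double ssthresh;  /* slow-start threshold, in segments */
    double bwe;       /* bandwidth estimate, bytes/s */
    double rtt;       /* smoothed round-trip time, seconds */
    double seg_size;  /* segment size, bytes */
};

/* Convert the bandwidth-delay product into a window in segments. */
static double bdp_segments(const struct fr_state *s)
{
    return (s->bwe * s->rtt) / s->seg_size;
}

/* Three duplicate ACKs detected. */
static void on_three_dupacks(struct fr_state *s)
{
    s->ssthresh = bdp_segments(s);  /* instead of cwin / 2 */
    if (s->cwin > s->ssthresh)
        s->cwin = s->ssthresh;
}

/* Retransmission timeout expired. */
static void on_timeout(struct fr_state *s)
{
    s->ssthresh = bdp_segments(s);  /* instead of cwin / 2 */
    s->cwin = 1.0;                  /* restart from one segment */
}

int main(void)
{
    /* cwin=60 segs, ssthresh=32 segs, BWE=600 kB/s, RTT=100 ms, MSS=1500 B */
    struct fr_state s = { 60.0, 32.0, 600000.0, 0.1, 1500.0 };
    on_three_dupacks(&s);
    printf("after 3 dupacks: cwin=%.1f ssthresh=%.1f\n", s.cwin, s.ssthresh);
    on_timeout(&s);
    printf("after timeout:   cwin=%.1f ssthresh=%.1f\n", s.cwin, s.ssthresh);
    return 0;
}
```

With ssthresh set to the estimated bandwidth-delay product, the
connection re-opens its window toward the capacity it actually
observed, rather than toward an arbitrary half of the old window.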

7
Experimental Results
  • Compare behavior of TCP Faster Recovery with Reno
    and Sack
  • Compare goodputs of TCP with Faster Recovery, TCP
    Reno and TCP Sack
  • with bursty traffic (e.g., UDP traffic)
  • over lossy links

8
FR/Reno Comparison
1 TCP, 1 on/off UDP (ON/OFF 100 s), 5 MB buffer, 1.2 s
RTT, 150 Mb/s capacity
[Figure: normalized throughput vs. time (sec) for FR and
Reno]
9
Goodput in Presence of UDP: Different Bottleneck Sizes
[Figure: goodput (Mb/s) vs. bottleneck bandwidth (Mb/s)
for FR, Reno, and Sack]
10
Wireless and Satellite Networks
link capacity 1.5 Mb/s, single one-hop connection
[Figure: goodput (bits/s) vs. bit error rate (log scale)
for Tahoe, Reno, and FR]
11
Experiment Environment
  • New version of TCP FR called TCP Westwood
  • TCP Westwood is implemented in Linux kernel
    2.2.10.
  • Link emulator can emulate
  • link delay
  • loss events
  • Sources share a bottleneck link through a router to
    the destination.

12
Goodput Comparison with Reno (Sack)
  • Bottleneck capacity 5 Mb/s
  • Packet loss rate 0.01
  • Larger pipe size corresponds to longer delay
  • Link delay 300 ms
  • Bottleneck bandwidth 5 Mb/s
  • Concurrent on-off UDP traffic

13
Friendliness with Reno
  • Goodput comparison when TCP-W and Reno share the
    same bottleneck
  • over perfect link
  • 5 Reno start first
  • 5 Westwood start after 5 seconds
  • 100 ms link delay
  • Goodput comparison when TCP-W and Reno share the
    same bottleneck
  • over lossy link (1)
  • 3 Reno start first, then 2 Westwood
  • 100 ms link delay
  • TCP-W improves performance over the lossy link but
    does not capture the whole link.

14
Current Status / Open Issues
  • Extended testing of TCP Westwood
  • Friendliness/greediness towards other TCP schemes
  • Refinements of bandwidth estimation process
  • Behavior with short-lived flows, and with large
    number of flows

15
Extra slides follow
16
Losses Caused by UDP: Different RTT
[Figure: goodput (Mb/s) vs. one-way RTT (s) for FR, Reno,
and Sack]
17
Losses Caused by UDP: Different Number of Connections
[Figure: goodput (Mb/s) vs. number of connections for FR,
Reno, and Sack]
18
TCP over Lossy Links: Different Bottleneck Size
[Figure: goodput (Mb/s, log scale) vs. bottleneck
bandwidth (Mb/s) for FR, Reno, and Sack]
19
Bursty Traffic: Different Number of Connections
[Figure: goodput (Mb/s) vs. number of connections for FR,
Reno, and Sack]
20
Fairness of TCP Westwood
  • Goodput of concurrent TCP-W connections
  • 5 connections (other 2 are similar)
  • link delay 100 ms
  • Cwnds of two TCP Westwood connections
  • over lossy link
  • concurrent UDP traffic
  • time-shifted
  • link delay 100 ms