Characterization of TCP flows over Large-Fat Networks - PowerPoint PPT Presentation

Description:

Presented at PFLDNet 2003: Characterization of TCP flows over Large-Fat Networks.

Transcript and Presenter's Notes

1
Characterization of TCP flows over Large-Fat Networks
  • Antony Antony, Johan Blom,
  • Cees de Laat, Jason Lee, Wim Sjouw
  • University of Amsterdam
  • 3 February 2003

2
Networks used
3
Layer-2 requirements from layers 3/4
[Diagram: two workstations (WS) on a high-RTT path, with layer-2 fast->slow and slow->fast transitions around a slow middle segment]
TCP is bursty due to its sliding-window protocol and
slow-start algorithm: Window = Bandwidth × RTT.
The slow link runs at its full rate only when the bottleneck
buffer can absorb the fast-to-slow burst:

BW = slow   when   (fast - slow) / fast <= Memory-at-bottleneck / (slow × RTT)
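The window relation and the buffer needed at the fast-to-slow transition can be sketched numerically. A minimal sketch; the link rates below are illustrative assumptions, not measurements from the talk:

```python
# Sketch of the slide's relations. All numbers are illustrative
# assumptions, not measurements from the talk.

def window_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def buffer_needed_bytes(fast_bps, slow_bps, rtt_s):
    """Memory needed at a fast->slow layer-2 transition: a window-sized
    burst arrives at the fast rate but drains at the slow rate, so the
    fraction (fast - slow)/fast of the window must be buffered."""
    w = window_bytes(slow_bps, rtt_s)  # window sized for the slow link
    return w * (fast_bps - slow_bps) / fast_bps

fast = 1e9      # 1 Gb/s sending-side link (assumed)
slow = 622e6    # OC12-class bottleneck (assumed)
rtt = 0.158     # the slides' 158 ms Amsterdam - Vancouver RTT
print(window_bytes(slow, rtt) / 1e6)               # MBytes in flight
print(buffer_needed_bytes(fast, slow, rtt) / 1e6)  # MBytes of L2 buffer
```

With these numbers the flow keeps roughly 12 MByte in flight and the bottleneck needs several MByte of buffer, far beyond typical switch memory, which is the point of the next slides.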
4
5000 1 kByte UDP packets
5
Self-clocking of TCP
[Diagram: self-clocking over the high-RTT WS-to-WS path with L2 fast->slow and slow->fast transitions; inter-packet gaps marked as 20 µsec and 14 µsec]
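The µsec labels on the slide are inter-packet gaps, and a gap is simply packet size over line rate. A minimal sketch; the 1500-byte frame size and 622 Mb/s rate are assumptions for illustration, chosen because they land near the slide's 20 µsec figure:

```python
def gap_usec(packet_bytes, rate_bps):
    """Inter-packet gap when packets leave back-to-back at the line rate."""
    return packet_bytes * 8 / rate_bps * 1e6

# 1500-byte frames on a 622 Mb/s link leave about 19.3 usec apart,
# close to the 20 usec spacing marked on the slide.
print(round(gap_usec(1500, 622e6), 1))
```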
6
Possible BW due to lack of buffer at the bottleneck
Forbidden area: solutions for s when f = 1 Gb/s, M = 0.5 MByte, and no flow control is used
[Plot: achievable slow-link rate s vs. rtt, with OC1/OC3/OC6/OC9/OC12 levels marked; the 158 ms Amsterdam - Vancouver RTT is indicated]
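The slide's curve can be reproduced from the buffer condition s × rtt × (f − s)/f ≤ M: for the slide's f = 1 Gb/s and M = 0.5 MByte, solving the quadratic gives the largest slow-link rate that stays out of the forbidden area. A sketch, with the RTT sample points chosen for illustration:

```python
import math

# For fast sender f and bottleneck buffer M, the slow link s only runs
# at full rate if s * rtt * (f - s) / f <= M (the part of the burst that
# doesn't fit in M is dropped). f and M follow the slide; the RTT
# sample points are illustrative.

def max_safe_slow_bps(f_bps, m_bytes, rtt_s):
    """Largest s (below f/2) with s*rtt*(f-s)/f <= m; None if every s works."""
    m_bits = m_bytes * 8
    disc = f_bps**2 - 4 * f_bps * m_bits / rtt_s
    if disc < 0:
        return None  # buffer is large enough for any slow rate
    return (f_bps - math.sqrt(disc)) / 2

for rtt_ms in (20, 80, 158):
    s = max_safe_slow_bps(1e9, 0.5e6, rtt_ms / 1e3)
    print(rtt_ms, "ms ->", round(s / 1e6, 1), "Mb/s")
```

At 158 ms the safe rate is only about 26 Mb/s, below even OC1, which matches the wide forbidden area on the plot.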
7
Characterising a TCP Flow
  • Three different phases of TCP
  • Bandwidth discovery phase
  • Steady state
  • Congestion avoidance
  • Is it due to implementation, protocol, or philosophy?

8
Receiver can't cope with the burst
9
TCP flow falls out of slow start without a packet loss
10
Modifications
  • Faster host/NIC
  • Pacing out packets at the device level
  • HSTCP, using Net100 2.1
  • Queue length on the interface (Linux-specific)
  • IFQ manipulation using Net100
  • Changing TXQ using ifconfig

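The queue-length items above can be exercised from the shell. A config sketch; `eth0` and the value 2000 are placeholder choices, not settings from the talk:

```shell
# Raise the device transmit queue length (Linux; the default was 100
# in 2.4-era kernels). eth0 and 2000 are illustrative values.
ifconfig eth0 txqueuelen 2000

# Verify the new setting
ifconfig eth0 | grep -i txqueuelen
```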
11
Adding Delay at Device level
12
NIKHEF -> EVL: 618 Mbps, 3 min
13
NIKHEF -> ANL: 410 Mbps for an hour over a 622 Mbps path
14
Throughput vs. TXQ, Amsterdam to Chicago (Linux)
[Plot: throughput against TXQ length; the Linux default is 100]
15
TCP performance comparison (Mbps).

Network                  Single Stream   Multi Stream   UDP
Lambda (622M)                 80              540        560
Lambda (1.2G)                120              580        800
iGrid2002                    120              580        800
Post iGrid2002 (HSTCP)       730              730        900
DataTag (HSTCP)              950              950        950
16
Near GigE using vanilla TCP!
Sunnyvale - Chicago - Amsterdam - CERN - Amsterdam: a 196 msec path
  • 980Mbps Sunnyvale to Amsterdam (196 msec)
  • Jumbo Frames, no congestion, entire path was
    OC192
  • Throughput drops when there is congestion
  • With HSTCP (Net100) the flow recovers from congestion events

17
Ideal Flow (196 msec, 980 Mbps, Jumbo Frames)
18
Conclusion
  • Throughput of a TCP flow depends on slow-start
    behavior
  • If you get early congestion, your flow will
    probably not recover before you finish
  • TCP is not robust.
  • Is this an implementation, protocol, or
    philosophical problem?

19
Future Work
  • Higher-speed single TCP flows in the WAN
  • Using 10 GigE NICs on the end hosts
  • Use traces captured from the wire to examine the
    behavior of TCP
  • Closer look at TCP behavior over lambdas vs.
    routed networks

20
Thanks!
  • URL: http://www.science.uva.nl/research/air/