TCP Nice: A Mechanism for Background Transfers

1
TCP Nice: A Mechanism for Background Transfers
  • Arun Venkataramani, Ravi Kokku, Mike Dahlin
  • Laboratory of Advanced Systems Research (LASR)
  • Computer Sciences, University of Texas at Austin

2
What are background transfers?
  • Data that humans are not waiting for
  • Non-deadline-critical
  • Unlimited demand
  • Examples
  • Prefetched traffic on the Web
  • File system backup
  • Large-scale data distribution services
  • Background software updates
  • Media file sharing

3
Desired Properties
  • Utilization of spare network capacity
  • No interference with regular transfers
  • Self-interference
  • applications hurt their own performance
  • Cross-interference
  • applications hurt other applications' performance

4
Goal: Self-tuning protocol
  • Many systems use magic numbers
  • Prefetch threshold [DuChamp99, Mogul96]
  • Rate limiting [Tivoli01, Crovella98]
  • Off-peak hours [Dykes01], many backup
    systems
  • Limitations of manual tuning
  • Difficult to balance concerns
  • Proper balance varies over time
  • Burdens application developers
  • Complicates application design
  • Increases risk of deployment
  • Reduces benefits of deployment

5
TCP Nice
  • Goal: abstraction of free, infinite bandwidth
  • Applications say what they want
  • OS manages resources and scheduling
  • Self-tuning transport layer
  • Reduces risk of interference with FG traffic
  • Significant utilization of spare capacity by BG
  • Simplifies application design

6
Outline
  • Motivation
  • How Nice works
  • Evaluation
  • Case studies
  • Conclusions

7
Why change TCP?
  • TCP does network resource management
  • Need flow prioritization
  • Alternative: router prioritization
  • More responsive; simple one-bit priority
  • Hard to deploy
  • Question:
  • Can end-to-end congestion control achieve
    non-interference and utilization?

8
Traditional TCP Reno
  • Adjusts congestion window (cwnd)
  • Competes for fair share of BW (cwnd/RTT)
  • Packet losses signal congestion
  • Additive Increase
  • no losses → increase cwnd by one packet per RTT
  • Multiplicative Decrease
  • one packet loss (triple duplicate ack) → halve
    cwnd
  • multi-packet loss → set cwnd = 1
  • Problem: signal comes after the damage is done
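The Reno behavior described above can be captured in a minimal sketch (not the kernel implementation; cwnd is measured in packets):

```python
def reno_update(cwnd, event):
    """Return the new congestion window after one RTT's worth of events."""
    if event == "ack":            # no losses: additive increase
        return cwnd + 1           # grow by one packet per RTT
    if event == "triple_dupack":  # single loss: multiplicative decrease
        return max(1.0, cwnd / 2)
    if event == "timeout":        # multi-packet loss: restart from one packet
        return 1.0
    raise ValueError(event)
```

Note that both loss signals arrive only after the bottleneck queue has already overflowed, which is the problem the slide points out.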

9
TCP Nice
  • Proactively detects congestion
  • Uses increasing RTT as congestion signal
  • Congestion → increased queue lengths → increased RTT
  • Aggressive responsiveness to congestion
  • Only modifies sender-side congestion control
  • Receiver and network unchanged
  • TCP friendly

10
TCP Vegas
  • Early congestion detection using RTTs
  • Additive decrease on early congestion
  • Restrict number of packets maintained by each
    flow at bottleneck router
  • Small queues, yet good utilization

11
TCP Nice
  • Basic algorithm
  • 1. Early detection: thresholded increase in RTT
    signals queue build-up
  • 2. Multiplicative decrease on early congestion
  • 3. Maintain small queues like Vegas
  • 4. Allow cwnd < 1.0
  • per-ack operation:
  • if (curRTT > minRTT + threshold·(maxRTT − minRTT))
    numCong++
  • per-round operation:
  • if (numCong > f·W) W ← W/2
  • else: Vegas congestion control
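The per-ack and per-round rules above can be sketched as follows; the values of `threshold` and `f` here are illustrative parameters, not the ones evaluated in the paper, and the Vegas fallback is elided:

```python
class NiceSender:
    """Sketch of TCP Nice's early-congestion counting and backoff."""

    def __init__(self, threshold=0.2, f=0.5):
        self.threshold = threshold   # fraction of the min..max RTT range
        self.f = f                   # fraction of a window that must look congested
        self.min_rtt = float("inf")
        self.max_rtt = 0.0
        self.num_cong = 0            # congested acks seen this round
        self.cwnd = 16.0

    def on_ack(self, cur_rtt):
        # per-ack: count acks whose RTT exceeds the early-congestion threshold
        self.min_rtt = min(self.min_rtt, cur_rtt)
        self.max_rtt = max(self.max_rtt, cur_rtt)
        if cur_rtt > self.min_rtt + self.threshold * (self.max_rtt - self.min_rtt):
            self.num_cong += 1

    def on_round_end(self):
        # per-round: halve cwnd if a fraction f of the window saw congestion;
        # otherwise Nice falls back to Vegas-style control (omitted here)
        if self.num_cong > self.f * self.cwnd:
            self.cwnd /= 2
        self.num_cong = 0
```

With a base RTT of 100 ms and a round of 200 ms samples, the window halves at the round boundary, well before any packet is lost.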

12
Nice: the works
  • Non-interference: getting out of the way in time
  • Utilization: maintaining a small queue

13
Evaluation of Nice
  • Analysis
  • Simulation micro-benchmarks
  • WAN measurements
  • Case studies

14
Theoretical Analysis
  • Prove small bound on interference
  • Main result: interference decreases
    exponentially with bottleneck queue capacity,
    independent of the number of Nice flows
  • Guided algorithmic design
  • All 4 aspects of protocol essential
  • Unrealistic model
  • Fluid approximation
  • Synchronous packet drop

15
TCP Nice Evaluation
  • Micro-benchmarks (ns simulation)
  • test interference under more realistic
    assumptions
  • Near optimal interference
  • 10-100X less than standard TCP
  • determine if Nice can get useful throughput
  • 50-80% of spare capacity
  • test under stressful network conditions
  • test under restricted workloads, topologies

16
TCP Nice Evaluation
  • Measure prototype on WAN
  • test interference in real world
  • determine if throughput is available to Nice
  • limited ability to control system
  • Results
  • Nice causes minimal interference
  • standard TCP causes significant interference
  • Significant spare BW throughout the day

17
NS Simulations
  • Background traffic
  • Long-lived flows
  • Nice, rate limiting, Reno, Vegas, Vegas-0
  • Ideal: router prioritization
  • Foreground traffic: Squid proxy trace
  • Reno, Vegas, RED
  • On/Off UDP
  • Single bottleneck topology
  • Parameters:
  • spare capacity, number of Nice flows, threshold
  • Metric:
  • average document transfer latency

18
Network Conditions
[Figure: document transfer latency (sec, log scale) vs. spare capacity, comparing V0, Reno, Nice, Vegas, and router prioritization]
  • Nice causes low interference even when there
    isn't much spare capacity.

19
Scalability
  • W < 1 allows Nice to scale to any number of
    background flows

20
Utilization
  • Nice utilizes 50-80% of spare capacity without
    stealing any bandwidth from the foreground

21
Case study 1: Data Distribution
  • Tivoli Data Exchange system
  • Complexity
  • 1000s of clients
  • different kinds of networks: phone line, cable
    modem, T1
  • Problem
  • tuning data sending rate to reduce interference
    and maximize utilization
  • Current solution
  • manual setting for each set of clients

22
Case study 1: Data Distribution
23
Case study 2: Prefetch-Nice
  • Novel non-intrusive prefetching system [USITS03]
  • Eliminates network interference using TCP Nice
  • Eliminates server interference using simple
    monitor
  • Readily deployable
  • no modifications to HTTP or the network
  • JavaScript based
  • Self-tuning architecture, end-to-end interference
    elimination

24
Case study 2: Prefetch-Nice
[Figure: response time vs. aggressiveness for a hand-tuned threshold, an aggressive threshold, and the self-tuning protocol]
  • A self-tuning architecture gives optimal
    performance for any setting

25
Prefetching Case Study
26
Prefetching case study
27
Conclusions
  • End-to-end strategy can implement simple
    priorities
  • Enough usable spare bandwidth out there that can
    be Nicely harnessed
  • Nice makes application design easy

28
TCP Nice Evaluation Key Results
  • Analytical
  • interference bounded by a factor that falls
    exponentially with bottleneck queue size,
    independent of the number of Nice flows
  • all 3 aspects of algorithm fundamentally
    important
  • Microbenchmarks (ns simulation)
  • near optimal interference
  • interference up to 10x less than standard TCP
  • TCP-Nice reaps substantial fraction of spare BW
  • Measure prototype on WAN
  • TCP Nice: minimal interference
  • standard TCP: significant interference
  • significant spare BW available throughout the day

29
Freedom from thresholds
  • Dependence on threshold is weak
  • small threshold → low interference

30
Manual tuning
  • Prefetching attractive, restraint necessary
  • prefetch bandwidth limit [DuChamp99, Mogul96]
  • threshold probability of access [Venk01]
  • pace packets by guessing user's idle time
    [Cro98]
  • prefetching during off-peak hours [Dykes01]
  • Data distribution / Mirroring
  • tune data sending rates on a per-user basis
  • Windows XP Background Intelligent Transfer
    Service (BITS)
  • rate throttling approach
  • different rates → different priority levels

31
Goal: Self-tuning protocol
  • Hand-tuning may not work
  • Difficult to balance concerns
  • Proper balance varies over time
  • Burdens application developers
  • Complicates application design
  • Increases risk of deployment
  • Reduces benefits of deployment

32
Nice: the works
  • Nested congestion triggers
  • Nice < Vegas < Reno

33
TCP Vegas
[Figure: expected vs. actual throughput as a function of window size W; the gap between them, bounded by α and β, estimates the queue]
  • E = W / minRTT
  • A = W / observedRTT
  • Diff = E − A

if (Diff < α / minRTT) W ← W + 1
else if (Diff > β / minRTT) W ← W − 1
  • Additive decrease when packets queue
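The Vegas update above can be sketched numerically; the α and β values here are illustrative defaults, not the ones from the paper:

```python
def vegas_update(W, min_rtt, observed_rtt, alpha=1.0, beta=3.0):
    """One round of Vegas-style additive window adjustment."""
    expected = W / min_rtt       # E: throughput if queues were empty
    actual = W / observed_rtt    # A: throughput actually achieved
    diff = expected - actual     # estimate of this flow's queued backlog
    if diff < alpha / min_rtt:   # too little queued: grow by one packet
        return W + 1
    if diff > beta / min_rtt:    # too much queued: shrink by one packet
        return W - 1
    return W                     # backlog within [alpha, beta]: hold steady
```

Because the decrease is only additive (one packet per round), Vegas alone backs off too slowly for a background flow, which motivates Nice's multiplicative decrease.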

34
Nice features
  • Small α, β not good enough; not even zero!
  • Need to detect absolute queue length
  • Aggressive backoff necessary
  • Protocol must scale to large number of flows
  • Solution
  • Detection → thresholded queue build-up
  • Avoidance → multiplicative backoff
  • Scalability → window allowed to drop below 1
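One way to realize a window below 1.0 is to send a single packet and then stay quiet for several RTT rounds; this pacing rule is a sketch of the idea, not the paper's exact mechanism:

```python
import math

def rtts_between_packets(W):
    """RTT rounds to wait between single-packet sends for a window W.

    For W >= 1 the sender transmits every round as usual; for a
    fractional window W = 1/k it sends one packet every k rounds,
    so average throughput still scales down with W.
    """
    if W >= 1.0:
        return 1
    return math.ceil(1.0 / W)
```

This is what lets many Nice flows share a bottleneck: their combined rate keeps shrinking even after each flow's window reaches one packet.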

35
Internet measurements
  • Long Nice and Reno flows
  • Interference metric: flow completion time

36
Internet measurements
  • Long Nice and Reno flows
  • Interference metric: flow completion time
  • Extensive measurements over different kinds of
    networks
  • Phone line
  • Cable modem
  • Transcontinental link (Austin to London)
  • University network (Abilene, UT Austin to
    Delaware)

37
Internet measurements
  • Long Nice and Reno flows
  • Interference metric: flow completion time

38
Internet measurements
  • Long Nice and Reno flows
  • Interference metric: flow completion time

39
Internet measurements
  • Useful spare capacity throughout the day

40
Internet measurements
  • Useful spare capacity throughout the day

41
NS Simulations - RED
42
NS Simulations - RED
43
Analytic Evaluation
  • All three aspects of protocol essential
  • Main result: interference decreases
    exponentially with bottleneck queue capacity,
    irrespective of the number of Nice flows
  • fluid approximation model
  • long-lived Nice and Reno flows
  • queue capacity >> number of foreground flows