NPS: A Non-interfering Web Prefetching System - PowerPoint PPT Presentation

Transcript and Presenter's Notes
1
NPS: A Non-interfering Web Prefetching System
  • Ravi Kokku, Praveen Yalagandula,
  • Arun Venkataramani, Mike Dahlin
  • Laboratory for Advanced Systems Research
  • Department of Computer Sciences
  • University of Texas at Austin

2
Summary of the Talk
  • Prefetching should be done aggressively, but
    safely
  • Safe = non-interference with demand
    requests
  • Contributions
  • A self-tuning architecture for web prefetching
  • Aggressive when spare resources are abundant
  • Safe when resources are scarce
  • NPS: a prototype prefetching system
  • Immediately deployable

3
Outline
  • Prefetch aggressively as well as safely
  • Motivation
  • Challenges/principles
  • NPS system design
  • Conclusion

4
What is Web Prefetching?
  • Speculatively fetch data that will be accessed in
    the future
  • Typical prefetch mechanism [PM96, MC98, CZ01]

(Diagram: the client sends demand requests; the server returns responses with
hint lists; the client then issues prefetch requests and receives prefetch
responses.)
5
Why Web Prefetching?
  • Benefits [GA93, GS95, PM96, KLM97, CB98, D99,
    FCL99, KD99, VYKSD01]
  • Reduces response times seen by users
  • Improves service availability
  • Encouraging trends
  • Numerous web applications getting deployed
  • News, banking, shopping, e-mail
  • Technology is improving rapidly
  • Increasing capacities and decreasing prices of disks and networks

Prefetch Aggressively
6
Why doesn't everyone prefetch?
  • Extra resources on servers, network and clients
  • Interference with demand requests
  • Two types of interference
  • Self-interference: applications hurt themselves
  • Cross-interference: applications hurt others
  • Interference at various components
  • Servers: demand requests queued behind prefetch requests
  • Networks: demand packets queued or dropped
  • Clients: caches polluted by displacing more
    useful data

7
Example Server Interference
  • Common load vs. response curve
  • Constant-rate prefetching reduces server capacity

(Figure: average demand response time (s) vs. demand connection rate
(conns/sec), for demand-only traffic and for constant prefetching at
pfrate=1 and pfrate=5.)
Prefetch Aggressively, BUT SAFELY
8
Outline
  • Prefetch aggressively as well as safely
  • Motivation
  • Challenges/principles
  • Self-tuning
  • Decoupling prediction from resource management
  • End-to-end resource management
  • NPS system design
  • Conclusion

9
Goal 1: Self-tuning System
  • Proposed solutions use magic numbers
  • Prefetch thresholds [D99, PM96, VYKSD01]
  • Rate limiting [MC98, CB98]
  • Limitations of manual tuning
  • Difficult to determine good thresholds
  • Good thresholds depend on spare resources
  • Good thresholds vary over time
  • Sharp performance penalty when mistuned
  • Principle 1: Self-tuning
  • Prefetch according to spare resources
  • Benefit: simplifies application design

10
Goal 2: Separation of Concerns
  • Prefetching has two components
  • Prediction: which objects would be beneficial to
    prefetch?
  • Resource management: how many can we actually
    prefetch?
  • Traditional techniques do not differentiate
  • Prefetch if prob(access) > 25%
  • Prefetch only the top 10 important URLs
  • Wrong way! We lose the flexibility to adapt
  • Principle 2: Decouple prediction from resource
    management (sketch below)
  • Prediction: the application identifies all useful
    objects, in decreasing order of importance
  • Resource management: uses Principle 1
  • Aggressive when resources are abundant
  • Safe when resources are scarce
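
The split can be made concrete with a small sketch; the names below
(predictHints, selectForPrefetch, spareBudget) are hypothetical and not
from the NPS code base.

    // Sketch only: predictHints() and spareBudget are illustrative names.

    // Prediction: rank every object that might be worth prefetching,
    // most valuable first (a PPM-style predictor would normally do this).
    function predictHints(currentUrl) {
      return ["/news/top.html", "/news/sports.html", "/img/banner.gif"];
    }

    // Resource management: take only as many hints as the budget allows.
    function selectForPrefetch(currentUrl, spareBudget) {
      const ranked = predictHints(currentUrl);
      return ranked.slice(0, spareBudget); // aggressive when the budget is
                                           // large, empty (safe) when zero
    }

    console.log(selectForPrefetch("/news/index.html", 2));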

11
Goal 3: Deployability
  • Ideal resource management vs. deployability
  • Servers
  • Ideal: OS scheduling of CPU, memory, disk
  • Problem: complexity (N-tier systems, databases, ...)
  • Networks
  • Ideal: use differentiated services / router
    prioritization
  • Problem: every router must support it
  • Clients
  • Ideal: OS scheduling, transparent informed
    prefetching
  • Problem: millions of already-deployed browsers
  • Principle 3: End-to-end resource management
  • Server: external monitoring and control
  • Network: TCP Nice
  • Client: JavaScript tricks

12
Outline
  • Prefetch aggressively as well as safely
  • Motivation
  • Principles for a prefetching system
  • Self-tuning
  • Decoupling prediction from resource management
  • End-to-end resource management
  • NPS prototype design
  • Prefetching mechanism
  • External monitoring
  • TCP-Nice
  • Evaluation
  • Conclusion

13
Prefetch Mechanism
(Diagram: server machine running the munger and hint server over the fileset;
client browser.)
1. Munger adds JavaScript to HTML pages
2. Client fetches HTML page
3. JavaScript on the page fetches the hint list
4. JavaScript on the page prefetches the hinted objects
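
A minimal sketch of the kind of script the munger might inject, assuming a
hypothetical /hints endpoint that returns a JSON list of URLs; the actual NPS
pages use the JavaScript tricks described in the paper, and this sketch uses
modern browser APIs for brevity.

    // Hypothetical injected prefetch script; the /hints endpoint is assumed.
    window.addEventListener("load", function () {
      // Step 3: fetch the hint list for this page from the hint server.
      fetch("/hints?page=" + encodeURIComponent(location.pathname))
        .then(function (resp) { return resp.json(); })
        .then(function (urls) {
          // Step 4: prefetch each hinted object so it lands in the cache.
          urls.forEach(function (u) {
            var img = new Image();
            img.src = u;
          });
        });
    });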
14
End-to-end Monitoring and Control
(Diagram: client loop "while(1) { getHint(); prefetchHint(); }"; the monitor
probes the server with "GET http://repObj.html" and receives "200 OK"; the
hint server runs "if (budgetLeft) send(hints); else send(return later)".)
  • Principle: low response times imply the server is not loaded
  • Periodic probing for response times
  • Estimation of spare resources (budget) at the server using AIMD
  • Distribution of budget
  • Controls the number of clients allowed to prefetch
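
A sketch of the budget estimation and distribution, assuming an AIMD rule with
illustrative constants; the monitor's actual parameters are in the paper.

    // Sketch of the server-side monitor; names and constants are assumptions.
    let budget = 0;               // how many clients may prefetch right now
    const THRESHOLD_MS = 100;     // assumed "server not loaded" cut-off

    // Called after each periodic probe of the representative object.
    function updateBudget(probedResponseTimeMs) {
      if (probedResponseTimeMs < THRESHOLD_MS) {
        budget += 1;                      // additive increase when lightly loaded
      } else {
        budget = Math.floor(budget / 2);  // multiplicative decrease under load
      }
    }

    // Hint server: hand out hints only while budget remains.
    function handleHintRequest(hints) {
      if (budget > 0) {
        budget -= 1;
        return hints;                     // this client may prefetch
      }
      return { retryAfterMs: 5000 };      // "return later"
    }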

15
Monitor Evaluation (1)
(Figure: average demand response time (s) vs. demand connection rate
(conns/sec); curves for no prefetching, manual tuning at pfrate=1 and
pfrate=5, and the monitor.)
  • End-to-end monitoring makes prefetching safe

16
Monitor Evaluation (2)
(Figure: bandwidth (Mbps) vs. demand connection rate (conns/sec); demand and
prefetch bandwidth under manual tuning at pfrate=1, compared with no
prefetching.)
  • Manual tuning is too damaging at high load

17
Monitor Evaluation (2)
(Figure: same axes as above, with the monitor's demand and prefetch bandwidth
curves added.)
  • Manual tuning is either too timid or too damaging
  • End-to-end monitoring is both aggressive and safe

18
Network Resource Management
  • Demand and prefetch on separate connections
  • Why is this required?
  • HTTP/1.1 persistent connections
  • In-order delivery of TCP
  • So prefetch affects demand
  • How to ensure separation?
  • Prefetching on a separate server port
  • How to use the prefetched objects?
  • JavaScript tricks (in the paper); a sketch follows below
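
As one illustration of the separation, a prefetch request can be sent to a
dedicated server port so it never shares a persistent connection with demand
traffic; port 81 below is an assumption, and the tricks for actually reusing
the prefetched copies are in the paper.

    // Sketch: prefetch from an assumed second port (81) so demand and
    // prefetch traffic never share a persistent HTTP/1.1 connection.
    function prefetchViaSeparatePort(paths) {
      paths.forEach(function (p) {
        var img = new Image();
        img.src = "http://" + location.hostname + ":81" + p;
      });
    }

    prefetchViaSeparatePort(["/img/banner.gif", "/news/sports.html"]);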

19
Network Resource Management
  • Prefetch connections use TCP Nice
  • TCP Nice
  • A mechanism for background transfers
  • End-to-end TCP congestion control
  • Monitors RTTs and backs off when congestion is
    detected (sketch below)
  • Previous study: OSDI 2002
  • Provably bounds self- and cross-interference
  • Utilizes significant spare network capacity
  • Server-side deployable
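
The core back-off rule can be sketched as follows; the constants are
illustrative, not the values from the OSDI 2002 paper.

    // Sketch of the TCP Nice congestion detector, run once per RTT round.
    const THRESHOLD = 0.2;   // assumed position between minRTT and maxRTT
    const FRACTION = 0.5;    // assumed fraction of delayed packets that
                             // triggers back-off

    function niceRoundEnd(rttSamples, minRtt, maxRtt, cwnd) {
      const cutoff = minRtt + (maxRtt - minRtt) * THRESHOLD;
      const delayed = rttSamples.filter(function (r) { return r > cutoff; }).length;
      // Back off multiplicatively when a significant fraction of this round's
      // packets saw inflated round-trip times (queues building up).
      return delayed > FRACTION * rttSamples.length ? cwnd / 2 : cwnd;
    }

    console.log(niceRoundEnd([110, 180, 185, 190], 100, 200, 8)); // -> 4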

20
End-to-end Evaluation
  • Measure avg. response times for demand reqs.
  • Compare with No-Prefetching and Hand-tuned
  • Experimental setup

(Setup: httperf clients replay an IBM server trace against the fileset; the
hint server uses a PPM predictor; networks are cable modem and Abilene.)
21
Prefetching with Abundant Resources
  • Both Hand-tuned and NPS give benefits
  • Note: Hand-tuned is set to its best configuration

22
Tuning the No-Avoidance Case
  • Hand-tuning takes effort
  • NPS is self-tuning

23
Prefetching with Scarce Resources
  • Hand-tuned hurts demand response times by 2-8x
  • NPS causes little damage to demand

24
Conclusions
  • Prefetch aggressively, but safely
  • Contributions
  • A prefetching architecture
  • Self-tuning
  • Decouples prediction from resource management
  • Deployable: few modifications to existing
    infrastructure
  • Benefits
  • Substantial improvements with abundant resources
  • No damage with scarce resources
  • NPS prototype
  • http://www.cs.utexas.edu/rkoku/RESEARCH/NPS/

25
Thanks
27
Client Resource Management
  • Resources CPU, memory and disk caches
  • Heuristics to control cache pollution (sketch below)
  • Limit the space prefetched objects take
  • Short expiration times for prefetched objects
  • Mechanism to avoid CPU interference
  • Start prefetching only after all demand loading is done
  • Handles self-interference, the more common case
  • What about cross-interference?
  • Client modifications might be necessary
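
A sketch of the deferral heuristic, assuming the injected script controls what
it prefetches; the cap and the hint source are hypothetical.

    // Sketch: defer prefetching until the demand page has fully loaded, and
    // cap the number of prefetched objects to limit cache pollution.
    const MAX_PREFETCH = 20;   // assumed cap, not an NPS constant

    function getHintList() {
      return [];               // placeholder; would come from the hint server
    }

    window.addEventListener("load", function () {
      getHintList().slice(0, MAX_PREFETCH).forEach(function (u) {
        var img = new Image();
        img.src = u;
      });
    });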