Static Optimality and Dynamic Search Optimality in Lists and Trees


1
Static Optimality and Dynamic Search Optimality in Lists and Trees
  • Avrim Blum
  • Shuchi Chawla
  • Adam Kalai

1/6/2002
2
List Update Problem
4 7 2 9 12 3 6 8
Query for element 9
  • Unordered list
  • Accessing x_i, the element at position i, takes time i

3
List Update Problem
9 4 7 2 12 3 6 8
Query for element 9
  • Unordered list
  • Accessing x_i, the element at position i, takes time i
  • Replacement cost: 0
  • Reorder cost: 1 per operation
  • What should the reordering policy be? (cost model sketched below)
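A minimal sketch of this cost model, assuming the standard list-update conventions (access cost equals the 1-indexed position, and moving the accessed element toward the front is the free "replacement"); move-to-front is used here only as a placeholder policy, and the function name is ours:

    def serve_with_move_to_front(initial_list, queries):
        """Serve queries under the slide's cost model, with move-to-front as a
        placeholder reordering policy: accessing the element at position i
        costs i, and moving the accessed element to the front is free."""
        lst = list(initial_list)
        total_cost = 0
        for q in queries:
            i = lst.index(q)             # 0-based position of the query
            total_cost += i + 1          # access cost = 1-indexed position
            lst.insert(0, lst.pop(i))    # free move of the accessed element
        return total_cost, lst

    # The example from the slide: querying 9 in the list 4 7 2 9 12 3 6 8
    print(serve_with_move_to_front([4, 7, 2, 9, 12, 3, 6, 8], [9]))
    # -> (4, [9, 4, 7, 2, 12, 3, 6, 8])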

4
Binary Search Tree
[Figure: a binary search tree on the keys 2, 4, 5, 7, 9, 12, 14, 15, with 7 at the root]
  • In-order tree (keys in BST order)
  • Search cost: the depth of the accessed element
  • Rotation cost: 1 per rotation
  • What should the rotation policy be? (model sketched below)
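A minimal sketch of this tree model, assuming the root sits at depth 1 (so search cost lines up with the list model's 1-indexed access cost); the class and function names are ours, and the example tree is our reconstruction of the slide's figure:

    class Node:
        """A BST node; the keys appear in in-order (symmetric) order."""
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def search_cost(root, key, depth=1):
        """Search cost = depth of the accessed element (root at depth 1)."""
        if root is None:
            return None
        if key == root.key:
            return depth
        nxt = root.left if key < root.key else root.right
        return search_cost(nxt, key, depth + 1)

    def rotate_left(parent):
        """One single rotation (cost 1): promote parent's right child."""
        child = parent.right
        parent.right, child.left = child.left, parent
        return child                     # new root of this subtree

    # One plausible shape for the slide's tree on keys 2, 4, 5, 7, 9, 12, 14, 15
    root = Node(7,
                Node(2, None, Node(5, Node(4))),
                Node(12, Node(9), Node(14, None, Node(15))))
    print(search_cost(root, 9))          # 3: the depth of element 9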

5
Binary Search Tree
Query for element 9
[Figure: the same tree after the access, with 9 rotated to the root]
  • In-order tree (keys in BST order)
  • Search cost: the depth of the accessed element
  • Rotation cost: 1 per rotation
  • What should the rotation policy be?

6
How good is a reordering algorithm?
  • Compare against the best offline algorithm: the dynamic competitive ratio
  • Compare against the best offline algorithm that cannot change the state of the list/tree (i.e., a static one): the static competitive ratio

7
Optimality
  • The cost of ALG is at most a constant factor times the cost of OPT
  • Dynamic optimality: OPT is the best dynamic offline algorithm
  • Static optimality: OPT is the best static offline algorithm
  • Strong static optimality: the ratio is (1 + ε)
  • Search optimality: ignore our rotation cost

8
Known results
  • List update
  • Dynamic ratio [Albers et al. '95, Teia '93]: upper bound 1.6, lower bound 1.5
  • Trees
  • Splay trees [Sleator, Tarjan '85]: static ratio 3

9
Known results
  • List update
  • Dynamic ratio [Albers et al. '95, Teia '93]: upper bound 1.6, lower bound 1.5
  • Trees
  • Splay trees [Sleator, Tarjan '85]: static ratio 3
  • Open questions that we address:
  • Strong static optimality + dynamic optimality for lists
  • Dynamic search optimality for trees (ignoring computation time and rotation costs)

10
Back to list update
  • How can we hope to achieve strong static optimality?
  • Solution: use a classic machine learning result!
  • The experts algorithm

11
Experts Algorithm [Littlestone '94]
  • N expert algorithms
  • We want to be within (1 + ε) of the best expert
  • Weighted Majority algorithm:
  • Assign each expert a weight according to how well it performs
  • Pick an expert probabilistically according to the weights
  • Applying this to list update: each list configuration is an expert
  • Too many experts: n!
  • Can we reduce the computation? (weighting step sketched below)
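A minimal sketch of the weighting step described above: a generic randomized weighted-majority / multiplicative-weights round, with the learning rate eps and the (1 - eps)**loss update chosen here for illustration rather than taken from the paper:

    import random

    def experts_step(weights, losses, eps=0.1):
        """One round: follow an expert drawn with probability proportional to
        its weight, then shrink every weight by (1 - eps) ** loss."""
        pick = random.choices(range(len(weights)), weights=weights, k=1)[0]
        new_weights = [w * (1 - eps) ** l for w, l in zip(weights, losses)]
        return pick, new_weights

    # With one expert per list configuration there would be n! weights to
    # maintain; the next slides show how to avoid that blow-up.
    pick, new_w = experts_step([1.0, 1.0], losses=[0.0, 1.0])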

12
Experts for a two-element list
  • List (x, y)
  • Experts: (x, y) and (y, x)
  • Weights: w_x, w_y
  • Algorithm (sketched below):
  • Initialize w_x, w_y to r_x, r_y drawn uniformly at random from {1, ..., 1/ε}
  • If x is accessed, w_x ← w_x + 1; else w_y ← w_y + 1
  • Always keep the element with the higher weight in front
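A minimal sketch of this two-element rule; the value of eps and the tie-break in favor of x are our assumptions:

    import random

    def two_element_order(accesses, eps=0.1):
        """accesses is a sequence of 'x'/'y'; returns the list order after
        each access (the heavier element is kept in front)."""
        wx = random.randint(1, round(1 / eps))    # w_x <- r_x from {1..1/eps}
        wy = random.randint(1, round(1 / eps))    # w_y <- r_y
        orders = []
        for a in accesses:
            wx, wy = (wx + 1, wy) if a == 'x' else (wx, wy + 1)
            orders.append(('x', 'y') if wx >= wy else ('y', 'x'))
        return orders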

13
List Factoring Lemma
  • Under a certain condition, if A performs well on a list of two elements, then it performs well on an arbitrary list
  • Condition: for an arbitrary list, given the same accesses, A orders x and y just as it would in a list containing only x and y

14
List Factoring Lemma
  • Example: move-to-front (checked in the sketch below)

Query 9 on the list 4 7 2 9: move-to-front yields 9 4 7 2. The relative order of 9 and 4 is the same as it would be in a list containing only 9 and 4, so the List Factoring Lemma applies.
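A small check of this factoring condition for move-to-front (a sketch of the condition itself, not of the lemma's proof; the helper name and the three-access example are ours):

    def mtf_front_of_pair(initial, accesses, x, y):
        """Run move-to-front and report which of x, y ends up in front."""
        lst = list(initial)
        for a in accesses:
            lst.insert(0, lst.pop(lst.index(a)))
        return x if lst.index(x) < lst.index(y) else y

    full = mtf_front_of_pair([4, 7, 2, 9], [9, 2, 9], x=9, y=4)
    projected = [a for a in [9, 2, 9] if a in (9, 4)]   # accesses to 9 or 4 only
    pair = mtf_front_of_pair([4, 9], projected, x=9, y=4)
    print(full == pair)   # True: MTF orders 9 and 4 as it would on the pair list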
15
Extending experts to a general list
  • Select r_i uniformly at random from {1, ..., 1/ε} for each element i
  • Initialize w_i ← r_i
  • When element i is accessed, w_i ← w_i + 1
  • Keep the elements in decreasing order of weight
  • This is (1 + ε) statically competitive (sketched below)
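A minimal sketch of the full rule; the value of eps, the tie-break by key, and counting only access costs are simplifications made here (this illustrates the rule, not the (1 + ε) bound):

    import random

    def weighted_list_cost(elements, accesses, eps=0.1):
        """Keep the list in decreasing order of weight; count access costs."""
        w = {e: random.randint(1, round(1 / eps)) for e in elements}  # w_i <- r_i
        cost = 0
        for a in accesses:
            order = sorted(elements, key=lambda e: (-w[e], e))        # heaviest first
            cost += order.index(a) + 1                                # 1-indexed access cost
            w[a] += 1                                                 # w_a <- w_a + 1
        return cost

    print(weighted_list_cost([4, 7, 2, 9, 12, 3, 6, 8], [9, 9, 9, 4, 7, 9]))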

16
Combining Static and Dynamic Optimality
  • A has strong static optimality, B has dynamic optimality
  • Combine the two to get the best of both: apply experts again
  • Technical difficulties:
  • We cannot estimate both weights: running both algorithms simultaneously defeats our purpose
  • Switching between experts has a huge cost, so don't switch very often

17
How to estimate weights?
  • The bandits approach:
  • Run the (so far) better expert
  • Assume good behavior from the other (a pessimistic approach for us)
  • After a few runs, we have sampled each expert sufficiently

18
Binary Search Trees
  • Unfortunately, similar shortcuts do not work

19
Why is the BST case harder?
  • We must decide which nodes should be near the root
  • We must also decide how to move those nodes up, which is not straightforward: there are 132 different ways of bringing a node at depth 7 to the root! (checked below)
  • We ignore the second issue and aim for dynamic search optimality
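A quick sanity check of the "132 ways" count, under our assumption (not stated on the slide) that it is the 6th Catalan number, with one single rotation needed per level on the way from depth 7 to the root:

    from math import comb

    def catalan(n):
        """The nth Catalan number: C(2n, n) / (n + 1)."""
        return comb(2 * n, n) // (n + 1)

    print(catalan(6))   # 132, matching the count quoted on the slide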

20
Outline of our approach
  • Design a probability distribution p over access sequences
  • Low offline cost ⇒ greater probability
  • Assume p reflects reality and predict the next access from it
  • Construct the tree based on the conditional probability of the next access
  • Low offline cost ⇒ node closer to the root ⇒ low online cost

21
An observation about offline BSTs
  • An access sequence with offline cost k can be expressed in 12k bits
  • So there are at most 2^(12k) sequences of offline cost k

22
An observation about offline BSTs
  • An access sequence with offline cost k can be expressed in 12k bits:
  • Start with a fixed tree
  • Express the rotations from one tree to the next using 6 bits per rotation
  • Assume the algorithm first brings the accessed element to the root (an extra factor of at most 2 in cost)
  • So there are at most 2^(12k) sequences of offline cost k

24
[Figure: the example tree before and after bringing 9 to the root; the rotations are recorded as the path description (left, up, right, up)]
  • This description uniquely specifies the rotations

26
Probability distribution on accesses
  • A distribution over access sequences a:
  • p(a) = 2^(-13k), where k is the offline cost of a
  • Use this to calculate the probability of the next access:
  • Run through all offline algorithms, obtain their costs, and take a weighted average
  • Much like experts in flavor
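A quick check, combining the weights above with the counting bound from the earlier slides, that the total probability mass is at most 1 (written in LaTeX notation):

    \sum_a p(a) \;\le\; \sum_{k \ge 1} 2^{12k} \cdot 2^{-13k} \;=\; \sum_{k \ge 1} 2^{-k} \;\le\; 1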

27
Probability distribution on accesses
  • A distribution over access sequences a: p(a) = 2^(-13k), where k is the offline cost of a
  • Use this to calculate the probability of the next access
  • Caveat: computationally infeasible
  • But it is dynamic search optimal!

28
What next?
  • Can we make this algorithm computationally feasible?
  • Can we get true dynamic optimality and strong static optimality for BSTs?
  • Lessons to take home:
  • Experts analysis is a useful tool for data structures
  • The generic algorithm is too slow
