Chapter 2: Fundamentals of the Analysis of Algorithm Efficiency - PowerPoint PPT Presentation


1
Dynamic Programming
Dr. M. Sakalli, Marmara University
Matrix Chain Problem
Assembly-line scheduling
Elements of dynamic programming
Picture reference: http://www.flickr.com/photos/7271221@N04/3408234040/sizes/l/in/photostream/ (crane strokes)
2
Dynamic Programming (pro-gram)
  • Like Divide and Conquer, DP solves a problem by partitioning it into sub-problems and combining their solutions. The differences are that:
  • DC is top-down, while DP is a bottom-up approach (though memoization allows a top-down variant).
  • The sub-problems are independent of each other in the former case, while they are not independent in dynamic programming.
  • Therefore, a DP algorithm solves every sub-problem just ONCE, saves its answer in a TABLE, and then reuses it. Memoization.
  • Optimization problems: many solutions are possible, and each has a value.
  • A solution built from optimal sub-solutions is called an optimal solution to the problem (an optimal solution, not the optimum; there may be several). Shortest path example.
  • The development of a DP algorithm can be done in four steps:
  • Characterize the structure of an optimal solution.
  • Recursively define the value of an optimal solution.
  • Compute the value of an optimal solution in a bottom-up fashion.
  • Construct an optimal solution from computed information.
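The table-and-reuse idea can be seen in a generic sketch (the example below is illustrative, not from the slides): a naive recursion recomputes the same subproblem many times, while a memo table makes each subproblem be solved just once.

```python
def fib_memo(n, table=None):
    # Illustrative memoization sketch: cache each answer in a table and
    # reuse it, so every subproblem is solved only once.
    if table is None:
        table = {}
    if n in table:
        return table[n]           # reuse the saved answer
    result = n if n < 2 else fib_memo(n - 1, table) + fib_memo(n - 2, table)
    table[n] = result             # save the answer in the table
    return result
```

The same pattern (memoized top-down vs. tabulated bottom-up) reappears in the matrix-chain algorithms later in the deck.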

(Figure: Dynamic Programming vs. Divide and Conquer)
3
Assembly-Line Scheduling
  • e_i: time to enter assembly line i
  • x_i: time to exit assembly line i
  • t_i,j: time to transfer off assembly line i after station j
  • a_i,j: processing time at station j of line i
  • Brute-force approach:
  • Enumerate all possible sequences through lines i ∈ {1, 2}.
  • For each sequence of n stations S_j, j ∈ [1, n], compute the passing time (the computation takes Θ(n) time).
  • Record the sequence with the smallest passing time.
  • However, there are too many possible sequences: 2^n in total.

4
Assembly-Line Scheduling
  • DP Step 1: Analyze the structure of the fastest way through the factory.
  • Seeking an optimal substructure: the fastest possible way through a station S_i,j contains the fastest way from the start to that station, through either line 1 at station j-1 (S_1,j-1) or line 2 at station j-1 (S_2,j-1).
  • For j = 1, there is only one possibility.
  • For j = 2, 3, ..., n, there are two possibilities: from S_1,j-1 or from S_2,j-1:
  • from S_1,j-1, additional time a_1,j
  • from S_2,j-1, additional time t_2,j-1 + a_1,j
  • Suppose the fastest way through S_1,j is through S_1,j-1; then the chassis must have taken a fastest way from the starting point through S_1,j-1. Why??? (If there were a faster way through S_1,j-1, substituting it would give a faster way through S_1,j, a contradiction.)
  • Similar reasoning holds for S_2,j-1.
  • An optimal solution to a problem contains within it optimal solutions to sub-problems:
  • the fastest way through station S_i,j contains within it the fastest way through station S_1,j-1 or S_2,j-1.
  • Thus we can construct an optimal solution to a problem from optimal solutions to sub-problems.

5
  • DP Step 2: A recursive solution.
  • DP Step 3: Computing the fastest times in Θ(n) time.
  • Problem with plain recursion: r_i(j) = 2^(n-j), so f_1[1] is referenced 2^(n-1) times.
  • Total references to all f_i[j]: Θ(2^n).

6
Running time: O(n).
  • Step 4: Construct the fastest way through the factory.

7
  • Determining the fastest way through the factory

8
Matrix-chain Multiplication
  • Problem definition: Given a chain of matrices A1, A2, ..., An, where matrix Ai has dimension p_i-1 × p_i, find the order of matrix multiplications minimizing the total number of scalar multiplications needed to compute the final product.
  • Let A be a p × q matrix and B a q × r matrix. Then the cost of computing AB is pqr scalar multiplications.
  • In the matrix-chain multiplication problem, the matrices are not actually multiplied; the aim is to determine an order for multiplying them that has the lowest cost.
  • The time invested in determining the optimal order must be more than paid for by the time saved later when actually performing the matrix multiplications.
  • C(p,r) = A(p,q) · B(q,r):
  • for i ← 1 to p
  •   for j ← 1 to r
  •     C[i,j] ← 0
  •     for k ← 1 to q
  •       C[i,j] ← C[i,j] + A[i,k] · B[k,j]

9
Example given in class
Suppose we want to multiply a sequence of matrices A1 A2 A3 A4 with dimensions 2×3, 3×5, 5×7, 7×2. Remember: matrix multiplication is associative but not commutative, so only the parenthesization may change.
1. (A1 A2)(A3 A4): total multiplications 30 + 70 + 20 = 120
2. ((A1 A2) A3) A4: total multiplications 30 + 70 + 28 = 128
3. A1 (A2 (A3 A4)): total multiplications 70 + 30 + 12 = 112
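The three totals above can be checked directly from the cost rule: multiplying a (p × q) matrix by a (q × r) matrix costs p·q·r scalar multiplications.

```python
# Dimensions from the slide: A1 is 2x3, A2 is 3x5, A3 is 5x7, A4 is 7x2.
order1 = 2*3*5 + 5*7*2 + 2*5*2   # (A1 A2)(A3 A4):  30 + 70 + 20 = 120
order2 = 2*3*5 + 2*5*7 + 2*7*2   # ((A1 A2) A3) A4: 30 + 70 + 28 = 128
order3 = 5*7*2 + 3*5*2 + 2*3*2   # A1 (A2 (A3 A4)): 70 + 30 + 12 = 112
```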
10
Parenthesization
  • The aim is to fully parenthesize the product of matrices, minimizing scalar multiplications.
  • For example, for the product A1 A2 A3 A4, a full parenthesization is ((A1 A2) A3) A4.
  • A product of matrices is fully parenthesized if it is either a single matrix, or a product of two fully parenthesized matrix products, surrounded by parentheses.
  • Brute-force approach:
  • Enumerate all possible parenthesizations.
  • Compute the number of scalar multiplications of each parenthesization.
  • Select the parenthesization needing the least number of scalar multiplications.
  • The number of parenthesizations of a product of n matrices, denoted P(n), is the sequence of Catalan numbers, growing as Ω(4^n / n^(3/2)); the solution to the recurrence is Ω(2^n).

The Brute-force approach is inefficient.
11
Catalan numbers: the number of ways in which parentheses can be placed in a sequence of numbers to be multiplied, two at a time.
  • 3 numbers
  • (1 (2 3)), ((1 2) 3)
  • 4 numbers
  • (1 (2 (3 4))), (1 ((2 3) 4)), ((1 2) (3 4)), ((1
    (2 3)) 4), (((1 2) 3) 4)
  • 5 numbers
  • (1 (2 (3 (4 5)))), (1 (2 ((3 4) 5))), (1 ((2 3)
    (4 5))), (1 ((2 (3 4)) 5)),
  • (1 (((2 3) 4) 5)), ((1 2) (3 (4 5))), ((1 2) ((3
    4) 5)), ((1 (2 3)) (4 5)),
  • ((1 (2 (3 4))) 5), ((1 ((2 3) 4)) 5), (((1 2) 3)
    (4 5)), (((1 2) (3 4)) 5),
  • (((1 (2 3)) 4) 5) ((((1 2) 3) 4) 5)

12
With DP
  • DP Step 1: structure of an optimal parenthesization.
  • Let A_i..j (i ≤ j) denote the matrix resulting from A_i · A_i+1 · ... · A_j.
  • Any parenthesization of A_i · A_i+1 · ... · A_j must split the product between A_k and A_k+1 for some k, i ≤ k < j. Its cost is the cost of computing A_i..k, plus the cost of computing A_k+1..j, plus the cost of multiplying A_i..k · A_k+1..j.
  • If k is the position for an optimal parenthesization, the parenthesization of the prefix subchain A_i · ... · A_k within this optimal parenthesization of A_i · ... · A_j must itself be an optimal parenthesization of A_i · ... · A_k.
  • (A_i · ... · A_k) · (A_k+1 · ... · A_j)

13
Step 2: Recursively define the value of an optimal solution
  • DP Step 2: a recursive relation.
  • Let m[i,j] be the minimum number of scalar multiplications needed to compute the matrix A_i · A_i+1 · ... · A_j.
  • The lowest cost to compute A1 A2 ... An is then m[1,n].
  • Recurrence:
  • m[i,j] = 0                                                if i = j
  • m[i,j] = min over i ≤ k < j of { m[i,k] + m[k+1,j] + p_i-1 · p_k · p_j }   if i < j
  • ( (A_i ... A_k) (A_k+1 ... A_j) )   (split at k)

14
Recursive (top-down) solution
  • using the formula for m[i,j]:
  • RECURSIVE-MATRIX-CHAIN(p, i, j)
  •   if i = j then return 0
  •   m[i,j] ← ∞
  •   for k ← i to j-1
  •     q ← RECURSIVE-MATRIX-CHAIN(p, i, k) + RECURSIVE-MATRIX-CHAIN(p, k+1, j) + p_i-1 · p_k · p_j
  •     if q < m[i,j] then m[i,j] ← q
  •   return m[i,j]

Complexity (counting the executions of each line): T(1) ≥ 1, and
T(n) ≥ 1 + Σ_{k=1}^{n-1} ( T(k) + T(n-k) + 1 )   for n > 1
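RECURSIVE-MATRIX-CHAIN translates almost line for line into Python; this sketch uses, as a check, the dimension array from the worked example later in the deck (A1: 30×1, A2: 1×40, A3: 40×10, A4: 10×25).

```python
def recursive_matrix_chain(p, i, j):
    # Exponential-time recursion: the same (i, j) subproblems are re-solved
    # over and over. Matrix A_i has dimension p[i-1] x p[i].
    if i == j:
        return 0
    best = float('inf')
    for k in range(i, j):                      # try every split point
        q = (recursive_matrix_chain(p, i, k)
             + recursive_matrix_chain(p, k + 1, j)
             + p[i - 1] * p[k] * p[j])
        best = min(best, q)
    return best
```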
15
Complexity of the recursive solution
  • Using the substitution method: we guess a solution and then prove by mathematical induction that it is correct.
  • Prove that T(n) = Ω(2^n); specifically, that T(n) ≥ 2^(n-1) for all n ≥ 1.
  • Induction base: T(1) ≥ 1 = 2^0.
  • Induction assumption: assume T(k) ≥ 2^(k-1) for all 1 ≤ k < n.
  • Induction step: T(n) ≥ n + Σ_{k=1}^{n-1} ( T(k) + T(n-k) ) ≥ n + 2 Σ_{k=1}^{n-1} 2^(k-1) = n + 2(2^(n-1) - 1) = 2^n + n - 2 ≥ 2^(n-1).

16
  • Step 3: Computing the optimal cost.
  • The recursive algorithm is exponential in n, Ω(2^n): no better than brute force.
  • But there are only Θ(n^2) distinct subproblems, one for each pair (i, j) with 1 ≤ i ≤ j ≤ n.
  • The recursion revisits the same overlapping subproblems many times.
  • If we table the answers to subproblems, each subproblem is solved only once.
  • The second hallmark of DP: overlapping subproblems, with every subproblem solved just once.

17
  • Step 3: Compute the value of an optimal solution bottom-up.
  • Input: n and an array p[0..n] containing the matrix dimensions.
  • State: m[1..n, 1..n] for storing m[i,j]; s[1..n, 1..n] for storing the optimal k that was used to calculate m[i,j].
  • Result: minimum-cost table m and split table s.
  • MATRIX-CHAIN-TABLE(p, n)
  •   for i ← 1 to n
  •     m[i,i] ← 0
  •   for l ← 2 to n
  •     for i ← 1 to n-l+1
  •       j ← i+l-1
  •       m[i,j] ← ∞
  •       for k ← i to j-1
  •         q ← m[i,k] + m[k+1,j] + p_i-1 · p_k · p_j
  •         if q < m[i,j]
  •           m[i,j] ← q
  •           s[i,j] ← k
  •   return m and s

Takes O(n^3) time; requires Θ(n^2) space.
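MATRIX-CHAIN-TABLE can be transcribed directly into Python (1-indexed tables, with row/column 0 unused, to stay close to the pseudocode):

```python
def matrix_chain_order(p):
    """Bottom-up matrix-chain DP. p[0..n] holds the dimensions: matrix A_i
    is p[i-1] x p[i]. Returns the cost table m and split table s, both
    1-indexed. O(n^3) time, Theta(n^2) space."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[None] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                  # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float('inf')
            for k in range(i, j):              # split A_i..k * A_k+1..j
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```

On the deck's example (p = [30, 1, 40, 10, 25]) this reproduces the tables on the next slides: m[1,4] = 1400 with split s[1,4] = 1.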
18
chains of length 1
A1: 30×1, A2: 1×40, A3: 40×10, A4: 10×25
m[i,j] table (entries: cost / split k stored in s[i,j]):
        j=1    j=2    j=3    j=4
  i=1    0
  i=2           0
  i=3                  0
  i=4                         0
i
19
chains of length 2
A1: 30×1, A2: 1×40, A3: 40×10, A4: 10×25
        j=1    j=2       j=3      j=4
  i=1    0    1200/1
  i=2           0       400/2
  i=3                     0     10000/3
  i=4                              0
20
chains of length 3
A1: 30×1, A2: 1×40, A3: 40×10, A4: 10×25
        j=1    j=2       j=3      j=4
  i=1    0    1200/1    700/1
  i=2           0       400/2    650/3
  i=3                     0     10000/3
  i=4                              0
21
chains of length 4
A1: 30×1, A2: 1×40, A3: 40×10, A4: 10×25
        j=1    j=2       j=3      j=4
  i=1    0    1200/1    700/1   1400/1
  i=2           0       400/2    650/3
  i=3                     0     10000/3
  i=4                              0
22
Printing the solution
A1: 30×1, A2: 1×40, A3: 40×10, A4: 10×25
Split table s[i,j]:
        j=2   j=3   j=4
  i=1    1     1     1
  i=2          2     3
  i=3                3
Call trace:
PRINT(s, 1, 4)
  PRINT(s, 1, 1)
  PRINT(s, 2, 4)
    PRINT(s, 2, 3)
      PRINT(s, 2, 2)
      PRINT(s, 3, 3)
    PRINT(s, 4, 4)
Output: (A1((A2A3)A4))
23
  • Step 4: Constructing an optimal solution.
  • Each entry s[i,j] = k shows where to split the product A_i A_i+1 ... A_j for the minimum cost:
  • A1 ... An = ( (A1 ... A_s[1,n]) (A_s[1,n]+1 ... An) )
  • To print the solution, invoke the following procedure with (s, 1, n) as the parameters:
  • PRINT-OPTIMAL-PARENS(s, i, j)
  •   if i = j then print "A_i"
  •   else print "("
  •     PRINT-OPTIMAL-PARENS(s, i, s[i,j])
  •     PRINT-OPTIMAL-PARENS(s, s[i,j]+1, j)
  •     print ")"
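A Python version that returns the parenthesization as a string (rather than printing piecemeal) is easy to test; it assumes s is indexable as s[i][j], as produced by the bottom-up table construction.

```python
def print_optimal_parens(s, i, j):
    # Build the optimal parenthesization string by following the split
    # table s: s[i][j] is the k at which A_i..j is split.
    if i == j:
        return f"A{i}"
    return ("(" + print_optimal_parens(s, i, s[i][j])
            + print_optimal_parens(s, s[i][j] + 1, j) + ")")
```

With the split values from the slides' example (s[1][4] = 1, s[2][4] = 3, s[2][3] = 2) it reproduces the output (A1((A2A3)A4)).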

24
  • Suppose
  • A1 A2 ... Ai ... Ar
  • with dimensions P1×P2, P2×P3, ..., Pi×Pi+1, ..., Pr×Pr+1 (note: on these slides, A_i is P_i × P_i+1).
  • Assume
  • m[i,j] = the number of multiplications needed to multiply A_i A_i+1 ... A_j
  • Initial values: m[i,i] = m[j,j] = 0
  • Final value: m[1,r]
  • Split: (A_i ... A_k)(A_k+1 ... A_j)
  • m[i,j] = m[i,k] + m[k+1,j] + P_i · P_k+1 · P_j+1
  • k can be any value with i ≤ k ≤ j-1.
  • We know the range of k but don't know the exact value of k.

25
  • Thus m[i,j] = min over i ≤ k ≤ j-1 of ( m[i,k] + m[k+1,j] + P_i · P_k+1 · P_j+1 ).
  • There are (j-1) - i + 1 = j - i choices of k; subproblems are solved in order of increasing j - i.
  • Example: calculate m[1,4] for
  • A1 A2 A3 A4 with dimensions 2×5, 5×3, 3×7, 7×2
  • P1 = 2, P2 = 5, P3 = 3, P4 = 7, P5 = 2
  • j-i = 0: m[1,1] = 0, m[2,2] = 0, m[3,3] = 0, m[4,4] = 0
  • j-i = 1: m[1,2] = 30, m[2,3] = 105, m[3,4] = 42
  • j-i = 2: m[1,3] = 72, m[2,4] = 72
  • j-i = 3: m[1,4] = 84
  • m[1,2] = min( m[1,1] + m[2,2] + P1·P2·P3 ) for k = 1
  •        = min( 0 + 0 + 2×5×3 ) = 30
(Table: the subproblems m(i,j), 1 ≤ i ≤ j ≤ 6, are computed diagonal by diagonal, in order of increasing j - i, ending with m(1,6).)
26
  • m[i,j] = min over i ≤ k ≤ j-1 of ( m[i,k] + m[k+1,j] + P_i·P_k+1·P_j+1 )
  • m[1,3] = min over 1 ≤ k ≤ 2 of ( m[1,k] + m[k+1,3] + P1·P_k+1·P4 )
  •        = min( m[1,1] + m[2,3] + P1·P2·P4 , m[1,2] + m[3,3] + P1·P3·P4 )
  •        = min( 0 + 105 + 2×5×7 , 30 + 0 + 2×3×7 )
  •        = min( 175, 72 ) = 72
  • m[2,4] = min over 2 ≤ k ≤ 3 of ( m[2,k] + m[k+1,4] + P2·P_k+1·P5 )
  •        = min( m[2,2] + m[3,4] + P2·P3·P5 , m[2,3] + m[4,4] + P2·P4·P5 )
  •        = min( 0 + 42 + 5×3×2 , 105 + 0 + 5×7×2 )
  •        = min( 72, 175 ) = 72
  • m[1,4] = min over 1 ≤ k ≤ 3 of ( m[1,k] + m[k+1,4] + P1·P_k+1·P5 )
  •        = min( m[1,1] + m[2,4] + P1·P2·P5 , m[1,2] + m[3,4] + P1·P3·P5 , m[1,3] + m[4,4] + P1·P4·P5 )
  •        = min( 0 + 72 + 2×5×2 , 30 + 42 + 2×3×2 , 72 + 0 + 2×7×2 )
  •        = min( 92, 84, 100 ) = 84

27
28
Memoized Matrix Chain
  • LOOKUP-CHAIN(p, i, j)
  •   if m[i,j] < ∞ then return m[i,j]
  •   if i = j then m[i,j] ← 0
  •   else for k ← i to j-1
  •     q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_i-1 · p_k · p_j
  •     if q < m[i,j] then m[i,j] ← q
  •   return m[i,j]
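LOOKUP-CHAIN with its initialization wrapped in a driver function looks like this in Python:

```python
def memoized_matrix_chain(p):
    """Top-down LOOKUP-CHAIN with a memo table: each subproblem (i, j) is
    solved once, then looked up. O(n^3) time overall, Theta(n^2) space.
    Matrix A_i has dimension p[i-1] x p[i]."""
    n = len(p) - 1
    m = [[float('inf')] * (n + 1) for _ in range(n + 1)]

    def lookup_chain(i, j):
        if m[i][j] < float('inf'):
            return m[i][j]                 # already solved: reuse it
        if i == j:
            m[i][j] = 0
        else:
            for k in range(i, j):
                q = (lookup_chain(i, k) + lookup_chain(k + 1, j)
                     + p[i - 1] * p[k] * p[j])
                if q < m[i][j]:
                    m[i][j] = q
        return m[i][j]

    return lookup_chain(1, n)
```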

29
For DP to be applicable, an optimization problem must have:
  • Optimal substructure
  • An optimal solution to the problem contains within it optimal solutions to subproblems.
  • Overlapping (dependent) subproblems
  • The space of subproblems must be small, i.e., the same subproblems are encountered over and over.

DP step 3 via memoization: T(n) = O(n^3), space Θ(n^2)
  • A top-down variation of dynamic programming.
  • Idea: remember the solutions to subproblems as they are solved in the simple recursive algorithm; the recursion itself may be somewhat costly.
  • Bottom-up DP is considered better when all subproblems must be calculated, because there is no overhead for recursion.
  • MEMOIZED-MATRIX-CHAIN(p, n): initialize all m[i,j] to ∞, then call LOOKUP-CHAIN(p, 1, n):
  • LOOKUP-CHAIN(p, i, j)
  •   if m[i,j] < ∞ then return m[i,j]
  •   if i = j then m[i,j] ← 0
  •   else for k ← i to j-1
  •     q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_i-1 · p_k · p_j
  •     if q < m[i,j] then m[i,j] ← q
  •   return m[i,j]

30
Elements of DP
  • Optimal substructure
  • A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
  • Overlapping subproblems
  • When a recursive algorithm revisits the same subproblems over and over again, the optimization problem has overlapping subproblems.
  • Subtleties
  • It is better not to assume that optimal substructure applies in general. Two examples in a directed graph G = (V, E) with vertices u, v ∈ V:
  • Unweighted shortest path:
  • Find a path from u to v consisting of the fewest edges. Good for dynamic programming.
  • Unweighted longest simple path:
  • Find a simple path from u to v consisting of the most edges. Not good for dynamic programming.

31
  • The running time of a dynamic-programming algorithm depends on the product of two factors:
  • the number of subproblems overall × the number of choices for each subproblem (summed over all choices).
  • Assembly-line scheduling:
  • Θ(n) subproblems × 2 choices = Θ(n)
  • Matrix-chain multiplication:
  • Θ(n^2) subproblems × at most (n-1) choices = O(n^3)

32
Principle of Optimality (Optimal Substructure)
  • The principle of optimality applies to a problem (not an algorithm).
  • A large number of optimization problems satisfy this principle.
  • Principle of optimality: given an optimal sequence of decisions or choices, each subsequence must also be optimal.
  • Principle of optimality in the shortest path problem:
  • Problem: given a graph G and vertices s and t, find a shortest path in G from s to t.
  • Theorem: a subpath P' (from s' to t') of a shortest path P is a shortest path from s' to t' in the subgraph G' induced by P. Subpaths are paths that start or end at an intermediate vertex of P.
  • Proof: if P' were not a shortest path from s' to t' in G', we could substitute the shortest path in G' from s' to t' for the subpath from s' to t' in P. The result would be a shorter path from s to t than P. This contradicts our assumption that P is a shortest path from s to t.

33
Principle of Optimality
  • P' must be a shortest path from c to e in G', otherwise P cannot be a shortest path from a to e in G.
  • Problem: what is the longest simple route between cities A and B?
  • Simple: never visit the same spot twice.
  • The longest simple route (solid line) has city C as an intermediate city.
  • It does not consist of the longest simple route from A to C followed by the longest simple route from C to B. Therefore it does not satisfy the principle of optimality.