CS 6212 DESIGN AND
ANALYSIS OF
ALGORITHMS
LECTURE: DYNAMIC PROGRAMMING
– PART I
Instructor: Abdou Youssef
OBJECTIVES OF THIS LECTURE (PART B)
By the end of Part B of this lecture, you will be able to:
• Describe how Dynamic Programming (DP) works, including its
major design steps
• State and explain the principle of optimality
• Prove the principle of optimality holds in some problems, but not
in others
• Begin to apply DP to derive powerful optimization algorithms
• Compare and contrast DP with Divide & Conquer and with the
Greedy Method
OUTLINE (OF PART B)
• Perspective
• DP as optimization
• DP vs. Greedy
• DP vs. Divide & Conquer
• Principle of Optimality
• Steps of Dynamic Programming
• First application: The Matrix Chain Problem
PERSPECTIVE
-- OPTIMIZATION --
• Dynamic programming is an optimization technique
• Recall what that is
• What is an optimization problem?
• What is an optimization algorithm?
PERSPECTIVE
-- DYNAMIC PROGRAMMING VS. GREEDY METHOD --
• Both techniques are optimization techniques
• Both build solutions from a collection of choices of individual elements.
• The greedy method
• Computes its solution
• by making its choices in a serial forward fashion,
• never looking back, and never revising previous choices
• There is no litmus test to quickly pre-check if the greedy solution will be optimal
• Dynamic programming
• computes its solution bottom up by
• synthesizing larger subsolutions from smaller ones, and
• by trying many possibilities and choices before it arrives at the optimal set of choices
• There is a litmus test for DP, called the Principle of Optimality, which can be
applied relatively easily and quickly to tell if DP gives an optimal solution
PERSPECTIVE
-- DYNAMIC PROGRAMMING VS. DIVIDE & CONQUER --
• Both techniques split their input into parts, find subsolutions to the
parts, and synthesize larger solutions from smaller ones
• Divide and Conquer splits its input at a few pre-specified
deterministic points (e.g., always in the middle)
• Dynamic Programming
• Splits its input at every possible split point rather than at a few
pre-specified points
• After trying all split points, it determines which split point is optimal
• DP is mainly for optimization, while D&C is mainly for non-
optimization problems (there are exceptions)
PRINCIPLE OF OPTIMALITY (POO)
• Definition: A problem is said to satisfy the Principle of
Optimality if the subsolutions of any optimal solution of
the problem are optimal solutions of their subproblems
PRINCIPLE OF OPTIMALITY (POO)
-- AN EXAMPLE WHERE POO HOLDS --
• The shortest path problem satisfies the POO
• Statement of the POO: if $v_1, v_2, \dots, v_k$ is a shortest path from node $v_1$ to node $v_k$ in a graph, then the portion of that path from $v_i$ to $v_j$ is a shortest path from $v_i$ to $v_j$, for any two intermediary nodes $v_i$ and $v_j$
• POO holds: proof by contradiction. If some portion is not shortest, then there is a shorter alternative ("a shortcut") between $v_i$ and $v_j$. Replacing that portion by the shortcut gives a shorter path from $v_1$ to $v_k$, contradicting that the original path was a shortest path.
• Red path: shortest from 1 to 9: 1, 2, 7, 5, 3, 9
• The portion from 2 to 3 (2, 7, 5, 3) is a shortest path from 2 to 3
[Figure: a weighted graph on nodes 1 through 9, with the red shortest path 1, 2, 7, 5, 3, 9 highlighted]
PRINCIPLE OF OPTIMALITY (POO)
-- AN EXAMPLE WHERE POO DOESN’T HOLD --
• The longest path problem doesn’t satisfy the
POO
• Statement of the POO: every sub-path of a longest simple path is a longest simple path between its endpoints
• POO does not hold: Proof by a counter-
example. See the counter-example on the
right.
• Red path: longest from 2 to 3: 2, 1, 7, 8, 9, 6, 4, 3
• The portion from 7 to 8 (of length 11) is not the longest simple path from 7 to 8, because 7, 1, 2, 3, 4, 6, 9, 8 is longer (57 > 11)
[Figure: the same weighted graph on nodes 1 through 9, with the red longest path from 2 to 3 highlighted]
LESSONS LEARNED SO FAR
• DP is an optimization design technique
• It has a litmus test, called the principle of optimality, which
holds in some problems but not in all problems
• More lessons later
STEPS OF DYNAMIC PROGRAMMING
Dynamic programming design involves 4 major steps:
1. Notation: Develop a mathematical notation that can express
every solution and subsolution for the problem at hand
2. POO: Prove that the Principle of Optimality holds
3. Recurrence Relation (RR): Develop a recurrence relation that
relates a solution to its subsolutions, using the math notation of
step 1. Indicate what the initial values are for that recurrence
relation, and which term signifies the final solution.
4. Algorithm for the RR: Write an algorithm to compute the
recurrence relation bottom up
COMMENTS ABOUT THE STEPS OF DP
• The steps on the previous slide are not a template
• That is because they are guidelines rather than algorithmic
instructions
• Steps 1 and 2 need not be in that order
• Do what makes sense in each problem
• If you don’t need notation to state (and prove) the principle of
optimality, then do step 2 first
• Otherwise, do step 1 first
• Step 3 is the heart of the design process
• In high level algorithmic design situations, one can stop at step 3.
• In this course, however, we will carry out step 4 as well.
MORE COMMENTS ABOUT THE STEPS OF DP
• Without the Principle of Optimality, it won't be possible to
derive a sensible recurrence relation in step 3
• This will be seen when we start using POO a little later
• When the Principle of Optimality holds, the 4 steps of DP are
guaranteed to yield an optimal solution
• No separate proof of optimality is needed
• The proof that POO holds is sufficient!
FIRST APPLICATION OF DP
-- THE MATRIX CHAIN PROBLEM --
The Matrix Chain Problem:
• Input: $n$ matrices $A_1, A_2, \dots, A_i, \dots, A_n$ of dimensions
$p_1 \times p_2,\ p_2 \times p_3,\ \dots,\ p_i \times p_{i+1},\ \dots,\ p_n \times p_{n+1}$
• Output: an order of pairing (parenthesizing) the matrices so that they
can be multiplied in the least amount of time
• Task: Design a DP algorithm for solving this problem
THE MATRIX CHAIN PROBLEM
-- PRELIMINARIES--
• A matrix is a table of numbers, i.e., a 2-dimensional array
• A matrix A[1:n, 1:m] is referred to as an $n \times m$ matrix
• Its horizontal lines are called rows (so this matrix has $n$ rows), labeled $1, 2, \dots, n$
• The vertical lines are called columns, labeled $1, 2, \dots, m$ (the matrix has $m$ columns)
• The element A[i, j] in row $i$ and column $j$ is denoted $a_{ij}$
• Matrix multiplication
• Let $A$ be a $p \times q$ matrix, and $B$ be a $q \times r$ matrix
• The product $C = AB$ is a $p \times r$ matrix, where
• $c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{iq} b_{qj}$ (the inner product of row $i$ of $A$ and column $j$ of $B$)
• Time of computing $c_{ij}$: $q$ multiplications and $q - 1$ additions; simplify that to $q$
• Time of computing all of $C = AB$: $p \times r \times q = pqr$
• Matrix multiplication is not commutative: $AB \ne BA$
• But it is associative: $(AB)C = A(BC)$
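To make this cost measure concrete, here is a minimal Python sketch (not from the slides) of the straightforward multiplication algorithm; the function name multiply and the list-of-lists representation are illustrative choices. The triple loop performs exactly $p \cdot r \cdot q$ scalar multiplications, which is the $pqr$ cost used throughout this lecture.

```python
# Straightforward multiplication of a (p x q) matrix A by a (q x r) matrix B.
# The triple loop performs p * r * q scalar multiplications, which is the
# cost measure used in this lecture.
def multiply(A, B):
    p, q, r = len(A), len(B), len(B[0])
    assert all(len(row) == q for row in A), "A must be p x q when B is q x r"
    C = [[0] * r for _ in range(p)]
    for i in range(p):          # each row of A
        for j in range(r):      # each column of B
            for k in range(q):  # inner product of row i of A and column j of B
                C[i][j] += A[i][k] * B[k][j]
    return C
```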
THE MATRIX CHAIN PROBLEM
-- EXAMPLE--
• Suppose we have 3 matrices $A_1, A_2, A_3$ of dimensions 6 × 3, 3 × 4, 4 × 5
• Two ways to multiply $A_1 A_2 A_3$ (both give the same answer): $(A_1 A_2) A_3$ or $A_1 (A_2 A_3)$
• $(A_1 A_2) A_3$: $A_1 A_2$ is a 6 × 4 matrix, and the final result is 6 × 5. Time = 6 × 3 × 4 + 6 × 4 × 5 = 192
• $A_1 (A_2 A_3)$: $A_2 A_3$ is a 3 × 5 matrix, and the final result is 6 × 5. Time = 3 × 4 × 5 + 6 × 3 × 5 = 150
• The second way is better
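As a quick check of the arithmetic above, here is a short illustrative sketch (the helper name mult_cost is hypothetical, not from the slides) comparing the two parenthesizations under the $p \cdot q \cdot r$ cost measure:

```python
# Cost (number of scalar multiplications) of multiplying a (p x q)
# matrix by a (q x r) matrix, per the cost measure of this lecture.
def mult_cost(p, q, r):
    return p * q * r

# (A1 A2) A3: A1 A2 is 6 x 4, then a (6 x 4)(4 x 5) product
cost_left = mult_cost(6, 3, 4) + mult_cost(6, 4, 5)    # 72 + 120 = 192
# A1 (A2 A3): A2 A3 is 3 x 5, then a (6 x 3)(3 x 5) product
cost_right = mult_cost(3, 4, 5) + mult_cost(6, 3, 5)   # 60 + 90 = 150
print(cost_left, cost_right)   # 192 150  -> the second way is better
```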
THE MATRIX CHAIN PROBLEM
-- (1) NOTATION AND STATEMENT OF THE POO --
• Input: $A_1, A_2, \dots, A_n$, where $A_i$ is of dimension $p_i \times p_{i+1}$
• Every way of multiplying a sequence of matrices can be represented by a binary (infix)
tree, where the leaves are the matrices, and the internal nodes are intermediary
products (as was illustrated on the previous slide)
• Let $T$ be the optimal tree for multiplying $A_i \times A_{i+1} \times \cdots \times A_j$
• Let L be its left subtree, R be its right subtree
• POO (informally): every subtree of an optimal tree is optimal
• POO (here): L and R are optimal
• Proof of the POO: prove that L and R are optimal
[Figure: the tree for $A_i \cdots A_j$ splits, for some $k$, into a left subtree L over $A_i \cdots A_k$ (a $p_i \times p_{k+1}$ result) and a right subtree R over $A_{k+1} \cdots A_j$ (a $p_{k+1} \times p_{j+1}$ result), with the root producing the $p_i \times p_{j+1}$ product]
THE MATRIX CHAIN PROBLEM
-- (2) PROOF OF THE POO --
• Let $\tau(T)$ denote the time to multiply the matrices according to tree $T$
• Thus: $\tau(T) = \tau(L) + \tau(R) + p_i\, p_{k+1}\, p_{j+1}$
• We need to prove that L (and similarly R) is optimal
• By contradiction: if L is not optimal, then there is a faster tree $L'$ for multiplying its matrices $A_i \cdots A_k$, i.e., $\tau(L') < \tau(L)$
• Take the tree $T'$ made of $L'$, R, and the same root
• $\tau(T') = \tau(L') + \tau(R) + p_i\, p_{k+1}\, p_{j+1} < \tau(L) + \tau(R) + p_i\, p_{k+1}\, p_{j+1} = \tau(T)$
• Hence $\tau(T') < \tau(T)$
• This implies that $T'$ is better than the optimal $T$. Contradiction. Q.E.D.
[Figure: two trees for $A_i \cdots A_j$: the optimal tree $T$ with subtrees L and R, and the tree $T'$ obtained by replacing L with the faster $L'$ while keeping R and the same root]
THE MATRIX CHAIN PROBLEM
-- MORE ON NOTATION AND VISUALS--
• Now that L and R are known to be optimal, they should be denoted $T_{ik}$ and $T_{k+1,j}$, respectively
• Visually, the optimal tree is:
• Let $M_{ij} = \tau(T_{ij})$
• When $i = j$, $T_{ii}$ is a single node (the single matrix $A_i$) with no multiplication, so $M_{ii} = 0$ for all $i$
• The optimal cost of multiplying $A_1 A_2 \cdots A_n$ is $M_{1n}$, to be determined
[Figure: the optimal tree for $A_i \cdots A_j$: for some $k$, the left subtree $T_{ik}$ multiplies $A_i \cdots A_k$ (a $p_i \times p_{k+1}$ result) and the right subtree $T_{k+1,j}$ multiplies $A_{k+1} \cdots A_j$ (a $p_{k+1} \times p_{j+1}$ result), rooted at the final $p_i \times p_{j+1}$ product]
THE MATRIX CHAIN PROBLEM
-- (3) RECURRENCE RELATION--
• We saw before that $\tau(T_{ij}) = \tau(L) + \tau(R) + p_i\, p_{k+1}\, p_{j+1}$
• Therefore, $\tau(T_{ij}) = \tau(T_{ik}) + \tau(T_{k+1,j}) + p_i\, p_{k+1}\, p_{j+1}$
• Using the M notation: $M_{ij} = M_{ik} + M_{k+1,j} + p_i\, p_{k+1}\, p_{j+1}$
• But what is $k$?
• It is a split point
• We don't know it, but we know that $i \le k \le j-1$, and that it is the best split point, i.e., the split point that gives us the minimum cost
• So to find the best $k$, try all of its possible values and take the best
• Therefore, the RR is:
$M_{ij} = \min_{i \le k \le j-1} \left( M_{ik} + M_{k+1,j} + p_i\, p_{k+1}\, p_{j+1} \right)$
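The lecture itself stops at the recurrence and works an example on the following slides, but as an illustration only, here is a minimal bottom-up sketch of how the recurrence could be computed. The function name matrix_chain, the dummy p[0] convention, and the split table are assumptions of this sketch, not part of the lecture.

```python
# A minimal bottom-up sketch of the recurrence
#   M[i][j] = min over i <= k <= j-1 of  M[i][k] + M[k+1][j] + p[i]*p[k+1]*p[j+1]
# using the slide's 1-indexed convention (p[0] is an unused dummy so that
# p[i] matches the slide's p_i).  Names are illustrative, not from the lecture.
def matrix_chain(p):
    """Return (M, split): M[i][j] = optimal cost of multiplying A_i ... A_j,
    split[i][j] = the best split point k recorded for M[i][j]."""
    n = len(p) - 2                      # number of matrices A_1 ... A_n
    M = [[0] * (n + 1) for _ in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):      # chain length: 2, 3, ..., n
        for i in range(1, n - length + 2):
            j = i + length - 1          # the chain is A_i ... A_j
            M[i][j] = float("inf")
            for k in range(i, j):       # try every split point i <= k <= j-1
                cost = M[i][k] + M[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if cost < M[i][j]:
                    M[i][j], split[i][j] = cost, k
    return M, split
```

The table is filled in order of increasing chain length, so every $M_{ik}$ and $M_{k+1,j}$ needed on the right-hand side has already been computed; the three nested loops of this sketch give an $O(n^3)$ running time.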
THE MATRIX CHAIN PROBLEM
-- (4) RECURRENCE RELATION IMPLEMENTATION --
• Now one needs to write an algorithm for computing all the $M_{ij}$,
for all $1 \le i \le j \le n$
• To be able to construct the actual optimal solution, we need to
record, for each $M_{ij}$, the $k$ that produces its minimum
• Since the matrix chain problem is a “toy” problem, we will not
produce the actual algorithm
• But we will carry out an example
DP FOR THE MATRIX CHAIN PROBLEM
-- RUNNING AN EXAMPLE: COMPUTING THE RR AS A TABLE --
• Take $n = 4$; $A_1$ is 3 × 5, $A_2$ is 5 × 7, $A_3$ is 7 × 3, and $A_4$ is 3 × 4; that is, $p_1 = 3$, $p_2 = 5$, $p_3 = 7$, $p_4 = 3$, $p_5 = 4$
• We will build a table to compute the $M_{ij}$'s bottom up. For each $M_{ij}$, we record the $k$ that gives its min value
Row 1 (chains of length 1): $M_{11} = 0$, $M_{22} = 0$, $M_{33} = 0$, $M_{44} = 0$
Row 2 (length 2): $M_{12} = 105$, $M_{23} = 105$, $M_{34} = 84$
Row 3 (length 3): $M_{13} = 150$ (k = 1), $M_{24} = 165$ (k = 3)
Row 4 (length 4): $M_{14} = 186$ (k = 3)
• For the 2nd row, every $M_{i,i+1} = p_i\, p_{i+1}\, p_{i+2}$ (why?), so it can be computed quickly
• For the 3rd row, let's illustrate the computation of $M_{13}$:
$M_{13} = \min_{1 \le k \le 3-1} \left( M_{1k} + M_{k+1,3} + p_1\, p_{k+1}\, p_4 \right)$
$M_{13} = \min\left( M_{11} + M_{23} + p_1 p_2 p_4,\ M_{12} + M_{33} + p_1 p_3 p_4 \right)$
$M_{13} = \min\left( 0 + 105 + 3 \times 5 \times 3,\ 105 + 0 + 3 \times 7 \times 3 \right)$
$M_{13} = \min(150, 168) = 150$, from the 1st $k$, i.e., $k = 1$
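For reference, running the illustrative matrix_chain sketch given after the recurrence relation (a hypothetical helper, not part of the lecture) on this example reproduces the table above:

```python
# Index 0 of p is a dummy so that p[i] matches the slide's p_i.
p = [None, 3, 5, 7, 3, 4]         # p_1..p_5 = 3, 5, 7, 3, 4
M, split = matrix_chain(p)
print(M[1][2], M[2][3], M[3][4])  # 105 105 84
print(M[1][3], split[1][3])       # 150 1
print(M[2][4], split[2][4])       # 165 3
print(M[1][4], split[1][4])       # 186 3
```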
RUNNING AN EXAMPLE
-- CONSTRUCTING THE ACTUAL OPTIMAL SOLUTION --
• $A_1$ is 3 × 5, $A_2$ is 5 × 7, $A_3$ is 7 × 3, and $A_4$ is 3 × 4;
$p_1 = 3$, $p_2 = 5$, $p_3 = 7$, $p_4 = 3$, $p_5 = 4$
• Use the table to construct the optimal solution
Row 1 (chains of length 1): $M_{11} = 0$, $M_{22} = 0$, $M_{33} = 0$, $M_{44} = 0$
Row 2 (length 2): $M_{12} = 105$, $M_{23} = 105$, $M_{34} = 84$
Row 3 (length 3): $M_{13} = 150$ (k = 1), $M_{24} = 165$ (k = 3)
Row 4 (length 4): $M_{14} = 186$ (k = 3)
Construction of the actual optimal solution:
• Since for $M_{14}$ the optimal $k = 3$, the best splitting of $A_1 A_2 A_3 A_4$ is $(A_1 A_2 A_3) A_4$
• Now, to find the best split of $A_1 A_2 A_3$, look up the $k$ that minimizes $M_{13}$: it is $k = 1$
• Therefore, the best splitting of $A_1 A_2 A_3$ is $A_1 (A_2 A_3)$
• Hence, the optimal solution is $(A_1 (A_2 A_3)) A_4$
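The same back-tracking can be written as a short recursive sketch over the recorded split points (again purely illustrative; parenthesize is a hypothetical name, and split is the table produced by the earlier matrix_chain sketch):

```python
# Read the optimal parenthesization off the recorded split points:
# split[i][j] is the k at which A_i ... A_j is best divided.
def parenthesize(split, i, j):
    if i == j:
        return f"A{i}"
    k = split[i][j]
    return "(" + parenthesize(split, i, k) + parenthesize(split, k + 1, j) + ")"

M, split = matrix_chain([None, 3, 5, 7, 3, 4])   # the table from this example
print(parenthesize(split, 1, 4))  # ((A1(A2A3))A4), i.e., A1(A2A3) first, then times A4
```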
LESSONS LEARNED SO FAR
• DP is an optimization design technique
• It has a litmus test (the principle of optimality): DP applies if the POO holds
• Proof of the POO is almost always by contradiction:
1. assume a subsolution is not optimal, so there is a better subsolution
2. replace it by a better one
3. This results in a new solution better than optimal, a contradiction
• Solving the recurrence relation is non-recursive, bottom up, from smallest
subsolutions to the final solution, where the subsolutions are usually recorded
by filling a table
• The actual optimal solution is constructed by using the optimal split points
recorded in the table
• More lessons later
MORE TO COME
• Next lecture, we complete our coverage of Dynamic
Programming
• Specifically, we use DP to solve two very serious problems:
• The all-pairs shortest path problem
• The optimal binary search tree problem