Date: 2021-11-18
CS 6212 DESIGN AND
ANALYSIS OF
ALGORITHMS
LECTURE: DYNAMIC PROGRAMMING
– PART II
Instructor: Abdou Youssef
CS 6212 Design and Analysis of Algorithms Dynamic Programming
1
WHAT YOU LEARNED LAST LECTURE ABOUT DP
Last lecture, you learned:
• How Dynamic Programming (DP) works, including its major
design steps
• The principle of optimality, and how to state it in a given problem
• How to prove the principle of optimality (by contradiction)
• How to apply DP to solve the Matrix Chain Problem, especially the
derivation of a recurrence relation, and computing it in a table
• How to use the optimal splits recorded in the table to construct the
actual optimal solution
OBJECTIVES OF THIS LECTURE
By the end of this lecture, you will be able to:
• Apply Dynamic Programming (DP) more firmly
• Derive a well-known DP algorithm for the all-powerful all-pairs
shortest-paths problem
• Formulate an optimal binary search tree approach to a practical
problem of searching for items of different access-frequencies
• Develop a DP algorithm for the Optimal Binary Search Tree
problem
• Conduct time complexity analysis on DP algorithms
OUTLINE
• Second application of DP: the all-pairs shortest-paths problem
• Formulation of searching for items of different access-
frequencies, as an optimal-BST problem
• Third application of DP: the OBST problem
THE ALL-PAIRS SHORTEST PATH PROBLEM
Problem statement:
• Input: A weighted graph G, represented by its weight matrix W[1:n, 1:n], where n = number of nodes in G.
• W[i, i] = 0 ∀ i
• W[i, j] = ∞ if there is no real edge between node i and node j;
• Otherwise, W[i, j] is a real number.
• Output: A distance matrix A[1:n, 1:n], where A[i, j] is the distance from node i to node j
• Task: Develop a DP algorithm for solving this problem
APPLYING DP TO THE APSP PROBLEM
-- (1) NOTATION (1/2) --
• Definition: A k-special path from node i to node j is a path from i to j where the label of every intermediary node is ≤ k
• 2-special paths from 1 to 7 (with respect to the example graph on this slide)
• [1, 2, 7]: Yes (its only intermediary node, 2, is ≤ 2)
• [1, 7]: Yes (it has no intermediary nodes)
• [1, 2, 3, 5, 7]: No (intermediary nodes 3 and 5 exceed 2)
• 2-special paths from 3 to 7:
• [3, 7]: Yes
• [3, 2, 7]: Yes
• 5-special paths from 3 to 7: [3, 7]; [3, 2, 7]; [3, 5, 7]; [3, 2, 1, 7]
[Figure: an example weighted graph on nodes 1–7; the slide notes that some of the "Yes" paths have length ∞ (no actual edge between the endpoints) and one has length 13]
APPLYING DP TO THE APSP PROBLEM
-- (1) NOTATION (2/2) --
• For all nodes i and j, and all positive integers k = 1, 2, …, n, let
• A^(k)[i, j] = length of the shortest k-special path from node i to node j
• Initial values: A^(0)[i, j] = ??
• Note that any 0-special path from node i to node j is the single edge (i, j), because when k = 0, no intermediate node can have a label ≤ 0 since all the labels are ≥ 1
• Therefore, A^(0)[i, j] = W[i, j]
• For what value of k is A^(k)[i, j] the distance from i to j?
• k = n. Why?
APPLYING DP TO THE APSP PROBLEM
-- (2) PRINCIPLE OF OPTIMALITY--
• We already saw last lecture that any sub-path of a shortest
path is a shortest path between its end nodes
APPLYING DP TO THE APSP PROBLEM
-- (3) RECURRENCE RELATION --
• Divide the k-special paths from node i to node j into two groups:
• Group A: Those k-special paths that do not go through node k
• Group B: Those k-special paths that go through node k
• A^(k)[i, j] = min(length of the k-special paths from node i to node j)
• A^(k)[i, j] = min( min(length of k-special paths in Group A),
min(length of k-special paths in Group B))
• Group A: if a k-special path doesn't go through node k, then every intermediary node on that path is ≤ k − 1, and thus the path is a (k − 1)-special path
• Therefore: min(length of k-special paths in Group A) = min(length of (k − 1)-special paths from node i to node j) = A^(k−1)[i, j]
APPLYING DP TO THE APSP PROBLEM
-- (3) RECURRENCE RELATION (CONTINUED)--
• Group B: k-special paths from node i to node j that go through node k
• The shortest k-special path P in Group B goes from node i to node k, then from node k to node j.
• Call the first portion path Q
• Call the second portion path R
• min(length of k-special paths in Group B) = length(P) = length(Q) + length(R)
• No intermediary node of Q can be node k, because otherwise path P would go through node k multiple times; that gives P cycles, so P would not be the shortest in its group, contradicting the choice of P as the shortest in Group B
• Therefore, every intermediary node of Q is ≤ k − 1
• Thus, Q is a (k − 1)-special path from node i to node k
• ∴ Q must be the shortest (k − 1)-special path from node i to node k (by POO), which can be proved by contradiction
• ∴ length(Q) = A^(k−1)[i, k]
• We can prove similarly that: length(R) = A^(k−1)[k, j]
• min(length of k-special paths in Group B) = length(Q) + length(R) = A^(k−1)[i, k] + A^(k−1)[k, j]
[Figure: path P from i to j through k, split into Q (from i to k) and R (from k to j)]
APPLYING DP TO THE APSP PROBLEM
-- (3) RECURRENCE RELATION (CONTINUED)--
• Recap:
• min(length of k-special paths in Group A) = A^(k−1)[i, j]
• min(length of k-special paths in Group B) = A^(k−1)[i, k] + A^(k−1)[k, j]
• A^(k)[i, j] = min( min(length of k-special paths in Group A),
min(length of k-special paths in Group B))
= min( A^(k−1)[i, j], A^(k−1)[i, k] + A^(k−1)[k, j] )
• So, the recurrence relation is:
• A^(k)[i, j] = min( A^(k−1)[i, j], A^(k−1)[i, k] + A^(k−1)[k, j] ),
where the recurrence index is k
• A^(0)[i, j] = W[i, j]
THE DP APSP ALGORITHM
-- ALSO KNOWN AS FLOYD-WARSHALL ALGORITHM --
Procedure APSP(input: W[1:n,1:n]; output: A[1:n,1:n])
begin
  for i=1 to n do
    for j=1 to n do
      A^(0)[i,j] = W[i,j];
    endfor
  endfor
  for k=1 to n do
    for i=1 to n do
      for j=1 to n do
        A^(k)[i,j] = min( A^(k−1)[i,j], A^(k−1)[i,k] + A^(k−1)[k,j] );
      endfor
    endfor
  endfor
end APSP
THE DP APSP ALGORITHM
-- SPACE COMPLEXITY --
Space Complexity Analysis:
• A^(k)[i, j] must be stored as A[k, i, j], and so A must be an array A[1:n, 1:n, 1:n]
• This takes O(n³) memory.
• That is too costly!!
• Can we do better?
THE DP APSP ALGORITHM
-- REDUCED SPACE COMPLEXITY (1) --
• Note that once the matrix A^(k) has been computed, there is no need for matrix A^(k−1)
• Therefore, we don't need to keep the superscript k
• By dropping it, the algorithm remains correct, and we save on space
• The algorithm becomes as presented next
THE DP APSP ALGORITHM
-- REDUCED SPACE COMPLEXITY (2) --
Procedure APSP(input: W[1:n,1:n]; output: A[1:n,1:n])
begin
  for i=1 to n do
    for j=1 to n do
      A[i,j] = W[i,j];
    endfor
  endfor
  for k=1 to n do
    for i=1 to n do
      for j=1 to n do
        A[i,j] = min( A[i,j], A[i,k] + A[k,j] );
      endfor
    endfor
  endfor
end APSP

Note: the inner assignment can equivalently be written as
  if ( A[i,k] + A[k,j] < A[i,j] ) then
    A[i,j] = A[i,k] + A[k,j];
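As a concrete sketch, the reduced-space algorithm above translates almost line-for-line into Python. The 4-node weight matrix below is a made-up example (not from the slides), using `inf` for missing edges:

```python
from math import inf

def apsp(W):
    """Floyd-Warshall: all-pairs shortest distances.
    W is an n-by-n weight matrix with W[i][i] == 0 and
    W[i][j] == inf when there is no edge from i to j."""
    n = len(W)
    A = [row[:] for row in W]      # A starts as a copy of W (the 0-special distances)
    for k in range(n):             # now allow node k as an intermediary
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

# Hypothetical 4-node example (0-indexed nodes):
W = [[0,   3, inf,   7],
     [8,   0,   2, inf],
     [5, inf,   0,   1],
     [2, inf, inf,   0]]
D = apsp(W)   # e.g. D[0][3] == 6, via the path 0 -> 1 -> 2 -> 3
```

Note that the in-place update is safe precisely because, as argued on the previous slide, row k and column k do not change during iteration k.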
THE DP APSP ALGORITHM
-- TIME & SPACE COMPLEXITY --
• The initialization double-for loop takes O(n²) time
• The triple-for loop takes O(n³) time
• Overall Time: O(n³)
• Overall Space: O(n²), because we only need the array A[1:n, 1:n]
THE DP APSP ALGORITHM
-- EXERCISES--
• Exercise 1: Show that dropping the superscript k does
not affect the correctness of the algorithm
• Exercise 2: Modify the APSP algorithm so it computes the
actual shortest paths, rather than just the distances
• Exercise 3:
a) Could we have used the greedy single-source shortest-
paths algorithm to compute the all-pairs shortest paths?
b) If so, how?
c) What would be the time complexity of that new
algorithm?
3RD APPLICATION OF DYNAMIC PROGRAMMING
-- OPTIMAL BINARY SEARCH TREES --
Problem Statement:
• Input:
• A sorted array a_1 < a_2 < ⋯ < a_n
• Access probabilities p_1, p_2, …, p_n, where p_i = Pr[a_i is searched for]
• Probabilities q_0, q_1, q_2, …, q_n of accessing missing elements, where q_0 = Pr[searching for x < a_1], q_i = Pr[searching for x, a_i < x < a_(i+1)], and q_n = Pr[searching for x > a_n]
• Output: A binary search tree (BST) for a_1, a_2, …, a_n such that the
expected search time is minimized (weighted by the probabilities of access).
Assume that no insert or delete operations will be performed.
• Task: Develop a Dynamic Programming algorithm for this problem
OPTIMAL BINARY SEARCH TREES
-- AN EXAMPLE --
• An example to get a better sense
• Suppose the a_1, a_2, …, a_n are phone numbers in a telephone directory
• Some numbers belong to celebrities => are searched for more often
• Some numbers belong to less well-known people, and so they’re searched for
less frequently
• The phone company needs a data structure for the phone numbers so that
• “hot” numbers are found fast (e.g., located close to the top of the BST),
• while the less-frequently accessed numbers can take longer to find (i.e., can
be placed in the middle or bottom of the BST)
• The directory is fairly stable: few inserts/deletes, but many searches
• Several data structures can be considered, but BSTs are good candidates
• Question: Why can’t we put all the numbers near the top so all are fast to
find?
OPTIMAL BINARY SEARCH TREES
-- EXAMPLE OF HOW TO GET THE PROBABILITIES --
• The phone company has a log of the number of times each phone
number (real or missing) was searched for in recent months/years
• Say that the company received
• N search requests during that period
• n_i of those requests were for number a_i
• m_i of those requests were for missing numbers that fall between a_i
and a_(i+1)
• Then, p_i = n_i/N and q_i = m_i/N for all i
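The log-based estimates above can be sketched in a few lines of Python. The function name and the example counts are illustrative only; the counts are chosen so that N = 20:

```python
def access_probabilities(hit_counts, miss_counts):
    """Estimate p_i = n_i / N and q_i = m_i / N from search-log counts.
    hit_counts[i]  = number of requests for key a_(i+1)  (n values)
    miss_counts[i] = number of requests for missing keys in the i-th gap
                     (n+1 values; miss_counts[0] covers keys below a_1)."""
    N = sum(hit_counts) + sum(miss_counts)
    p = [c / N for c in hit_counts]
    q = [c / N for c in miss_counts]
    return p, q

# Hypothetical log over 4 keys and 5 gaps, N = 20 requests total:
p, q = access_probabilities([2, 4, 6, 2], [0, 2, 1, 1, 2])
# p == [0.1, 0.2, 0.3, 0.1] and q == [0.0, 0.1, 0.05, 0.05, 0.1]
```

By construction the p's and q's sum to 1, since every logged request was either a hit or a miss.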
OPTIMAL BINARY SEARCH TREES
-- MANY SOLUTIONS EXIST--
• For the same input array a_1 < a_2 < ⋯ < a_n, there can be many BSTs
• Example: for the array 3 < 5 < 8 < 9, we can have
[Figure: three different BSTs over the keys 3, 5, 8, 9]
and many more
• Which is best?
OPTIMAL BINARY SEARCH TREES
-- CONCRETIZING THE COST OF A CANDIDATE BST (1) --
• The problem is asking for a best BST, with a vague notion
of “best”: minimizing the search time
• But the search time in a single BST varies: it takes O(d)
where d is the depth (of the search element) in the tree,
and that varies from node to node
• For optimization to work, we need to have a single cost
value per solution (i.e., per candidate BST)
• And the cost should be smaller for BSTs where the “hot”
elements can be found fast (i.e., close to the root)
OPTIMAL BINARY SEARCH TREES
-- CONCRETIZING THE COST OF A CANDIDATE BST (2) --
• Strategy:
• take the cost of a BST = the average search time in the tree
• Since not all the nodes are equally important, take a
weighted average
• The weights are the access probabilities
• First attempt at formulating the cost C(T) of a BST T:
• C(T) = average search time = Σ_{i=1..n} p_i × search_time(a_i)
• = Σ_{i=1..n} p_i (depth(a_i) + 1)
• But that ignores the searches for missing items
OPTIMAL BINARY SEARCH TREES
-- CONCRETIZING THE COST OF A CANDIDATE BST (3) --
• Some notation to help with access to missing
numbers:
• For every “missing leaf”, create a new
imaginary node, called an external node
(shown in green squares)
• Label the external nodes E_0, E_1, …, E_n
from left to right
• In an n-node BST, there are n+1 external
nodes
• If a_i < x < a_(i+1), search(x) takes us to
external node E_i
• search_time(E_i) = depth(E_i), and E_i is
searched for with probability q_i
[Figure: an example BST (keys 4, 10, 15, 20) with its external nodes drawn as green squares; Search(3), Search(7), and Search(17) each end at an external node]
OPTIMAL BINARY SEARCH TREES
-- CONCRETIZING THE COST OF A CANDIDATE BST (4) --
• Final formulation of the cost C(T) of a BST T:
• C(T) = Σ_{i=1..n} p_i × search_time(a_i) + Σ_{i=0..n} q_i × search_time(E_i)
• = Σ_{i=1..n} p_i (depth(a_i) + 1) + Σ_{i=0..n} q_i depth(E_i)
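This final cost can be evaluated directly on any candidate tree. Below is a minimal Python sketch; the tuple-based tree encoding and the function name are my own. The example tree is the optimal one constructed in the worked example later in the lecture (root a_3), and its cost comes out to 19/10:

```python
def bst_cost(tree, p, q):
    """C(T) = sum_i p_i*(depth(a_i)+1) + sum_i q_i*depth(E_i).
    tree: nested tuples (key_index, left, right); None marks an external node.
    p is 1-indexed (p[0] unused); q[i] is the probability of external node E_i,
    consumed left to right as the in-order walk meets the external nodes."""
    ext = [0]                                  # index of the next external node
    def walk(node, depth):
        if node is None:                       # an external node E_i at this depth
            i = ext[0]; ext[0] += 1
            return q[i] * depth
        k, left, right = node
        return (walk(left, depth + 1)
                + p[k] * (depth + 1)           # search time of an internal node is depth+1
                + walk(right, depth + 1))
    return walk(tree, 0)

# Tree with root a_3, left child a_2 (whose left child is a_1), right child a_4:
tree = (3, (2, (1, None, None), None), (4, None, None))
p = [0, 0.1, 0.2, 0.3, 0.1]
q = [0, 0.1, 0.05, 0.05, 0.1]
cost = bst_cost(tree, p, q)                    # 19/10
```

The in-order walk visits external nodes in left-to-right order, which is exactly how the labels E_0, …, E_n were assigned above.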
OBST USING DYNAMIC PROGRAMMING
-- (1) NOTATION --
• Let
• T_ij ≝ OBST(a_(i+1), …, a_j), that is, T_ij is the min-cost BST for
elements a_(i+1), …, a_j
• C_ij ≝ C(T_ij) = Σ_{t=i+1..j} p_t (depth(a_t) + 1) + Σ_{t=i..j} q_t depth(E_t)
• r_ij ≝ the index of the root of T_ij, i.e., a_(r_ij) is the root of T_ij
• Notice that
• The final OBST (for the whole array) is T_0n
• T_ii is empty for all i
• T_(i,i+1) is a single-node tree that has element a_(i+1)
OBST USING DYNAMIC PROGRAMMING
-- (2) PRINCIPLE OF OPTIMALITY --
• Statement of POO: The left subtree of an OBST is an
OBST, and the right subtree of an OBST is an OBST.
• Proof: Take an OBST T_ij and let
L and R be its left subtree and
right subtree. Prove that L
and R are both OBSTs
for their respective
elements.
[Figure: T_ij with root a_k for some k; L is the subtree over a_(i+1) … a_(k−1), and R is the subtree over a_(k+1) … a_j]
OBST
-- PROOF OF THE POO (1/2) --
• C(T_ij) = Σ_{t=i+1..j} p_t (depth_T(a_t) + 1) + Σ_{t=i..j} q_t depth_T(E_t)
• Break each sum along the two subtrees, noting that for every node x in L (or R),
depth_T(x) = depth_L(x) + 1 (or depth_R(x) + 1), because the root a_k adds one level:
• C(T_ij) = Σ_{t=i+1..k−1} p_t (depth_L(a_t) + 1 + 1) + p_k + Σ_{t=k+1..j} p_t (depth_R(a_t) + 1 + 1)
+ Σ_{t=i..k−1} q_t (depth_L(E_t) + 1) + Σ_{t=k..j} q_t (depth_R(E_t) + 1)
• C(T_ij) = C(L) + C(R) + (p_(i+1) + ⋯ + p_j) + (q_i + ⋯ + q_j)
• Define the weight W_ij ≝ (p_(i+1) + ⋯ + p_j) + (q_i + ⋯ + q_j); then:
C(T_ij) = C(L) + C(R) + W_ij
OBST
-- PROOF OF THE POO (2/2) --
• C(T_ij) = C(L) + C(R) + W_ij
• If L is not optimal, then we can find a better BST tree L′ for the same elements,
i.e., C(L′) < C(L)
• Replace L by L′ in T_ij, resulting in a new tree T′ where
C(T′) = C(L′) + C(R) + W_ij < C(L) + C(R) + W_ij = C(T_ij)
• Thus, C(T′) < C(T_ij), making T′ better than the optimal T_ij, a contradiction.
• ∴ The subtree L is optimal (and thus deserves the notation T_(i,k−1))
• Similarly, the subtree R is optimal (and thus deserves the notation T_kj)
• Q.E.D. (End of proof of the POO)
OBST
-- (3) RECURRENCE RELATION --
• The RR was largely derived during the proof of the POO
• C_ij = C(L) + C(R) + W_ij, where L = T_(i,k−1) and R = T_kj
• Recall that C(L) = C_(i,k−1) and C(R) = C_kj
• Therefore, C_ij = C_(i,k−1) + C_kj + W_ij. But what is k?
• k is a split point (a_k is at the root), and can be: i+1, i+2, …, j
• We try all possible values for k and pick the minimum.
Therefore:

C_ij = min_(i+1 ≤ k ≤ j) ( C_(i,k−1) + C_kj + W_ij );   C_ii = 0

• For the weights: W_ij = (p_(i+1) + ⋯ + p_j) + (q_i + ⋯ + q_j)
= (p_(i+1) + ⋯ + p_(j−1) + q_i + ⋯ + q_(j−1)) + p_j + q_j
= W_(i,j−1) + p_j + q_j;   W_ii = q_i
OBST
-- (4) ALGORITHM FOR THE RR --
// Compute the weights W_ij's first
Procedure weights(In: p[1:n], q[0:n]; Out: W[0:n, 0:n])
begin
  for i = 0 to n do
    W[i,i] = q[i];
  endfor
  for d = 1 to n do          // d = j − i = size of T_ij
    for i = 0 to n − d do
      j = i + d;
      W[i,j] = W[i,j−1] + p[j] + q[j];
    endfor
  endfor
end

Proc OBST(In: p[1:n], q[0:n], W[0:n, 0:n]; Out: C[0:n, 0:n], r[0:n, 0:n])
begin
  for i = 0 to n do C[i,i] := 0; endfor
  for d = 1 to n do
    for i = 0 to n − d do
      j = i + d;
      C[i,j] := ∞;
      m := i + 1;            // index of the min
      for k = i + 1 to j do  // compute the min over the split points
        if C[i,j] > C[i,k−1] + C[k,j] then
          C[i,j] = C[i,k−1] + C[k,j];
          m := k;
        endif
      endfor
      C[i,j] := C[i,j] + W[i,j];
      r[i,j] := m;
    endfor
  endfor
end

Time of weights: O(n²). Reason: the double-for loop, whose body takes O(1) time, takes O(n × n) · O(1) = O(n²)
Time of OBST: Σ_{d=1..n} Σ_{i=0..n−d} d ≤ Σ_{d=1..n} n·d = n · n(n+1)/2 = O(n³)
OBST
-- (4) ALGORITHM (CONTINUED) --
// Compute the tree
Proc create-tree(In: r[0:n, 0:n], a[1:n], i, j; Out: T)
begin
  if (i == j) then
    T = null;
    return;
  endif
  T := new(node);            // the root of T_ij
  k := r[i,j];
  T --> data := a[k];
  if (j == i + 1) then       // single-node tree
    return;
  endif
  create-tree(r, a, i, k−1; T --> left);
  create-tree(r, a, k, j; T --> right);
end create-tree

// Overall algorithm
Proc final-tree(In: a[1:n], p[1:n], q[0:n]; Out: T)
begin
  weights(p[1:n], q[0:n]; W[0:n, 0:n]);
  OBST(p[1:n], q[0:n], W[0:n, 0:n]; C[0:n, 0:n], r[0:n, 0:n]);
  create-tree(r[0:n, 0:n], a[1:n], 0, n; T);
end

Time of create-tree: O(n). Proof: by induction.
Overall Time: O(n²) + O(n³) + O(n) = O(n³)
Overall Space: O(n²) for the tables C[ ], W[ ], and r[ ]
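The procedures above can be sketched compactly in Python, with the weight computation folded into the main DP loop (the function names are my own). Run on the n = 4 example that follows in the slides, this sketch reproduces C_04 = 19/10 and r_04 = 3:

```python
import math

def obst(p, q):
    """DP for the Optimal BST problem.
    p: key probabilities, 1-indexed (p[0] is a placeholder);
    q: gap probabilities q[0..n].
    Returns the (C, W, r) tables, each (n+1) x (n+1)."""
    n = len(p) - 1
    W = [[0.0] * (n + 1) for _ in range(n + 1)]
    C = [[0.0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = q[i]                         # W_ii = q_i (and C_ii = 0)
    for d in range(1, n + 1):                  # d = j - i = subproblem size
        for i in range(n - d + 1):
            j = i + d
            W[i][j] = W[i][j - 1] + p[j] + q[j]
            best, best_k = math.inf, i + 1
            for k in range(i + 1, j + 1):      # try every split point a_k as root
                if C[i][k - 1] + C[k][j] < best:
                    best, best_k = C[i][k - 1] + C[k][j], k
            C[i][j] = best + W[i][j]
            r[i][j] = best_k
    return C, W, r

def build_tree(r, i, j):
    """create-tree: recover T_ij from the root table as (k, left, right)."""
    if i == j:
        return None                            # empty tree
    k = r[i][j]
    return (k, build_tree(r, i, k - 1), build_tree(r, k, j))

# The lecture's n = 4 example:
p = [0, 0.1, 0.2, 0.3, 0.1]
q = [0, 0.1, 0.05, 0.05, 0.1]
C, W, r = obst(p, q)
tree = build_tree(r, 0, 4)   # (3, (2, (1, None, None), None), (4, None, None))
```

Like the pseudocode, the strict comparison keeps the smallest winning k when several split points tie.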
OBST DP ALGORITHM
-- AN EXAMPLE (1/3) --
• Input: n = 4, a_1 < a_2 < a_3 < a_4;  p_1 = 1/10, p_2 = 2/10, p_3 = 3/10, p_4 = 1/10;
q_0 = 0, q_1 = 1/10, q_2 = 1/20, q_3 = 1/20, q_4 = 1/10
• Weights, computed with W_ii = q_i and W_ij = W_(i,j−1) + p_j + q_j,
one row after another, top to bottom:

W_00 = 0        W_11 = 1/10    W_22 = 1/20    W_33 = 1/20    W_44 = 1/10
W_01 = 2/10     W_12 = 3.5/10  W_23 = 4/10    W_34 = 2.5/10
W_02 = 4.5/10   W_13 = 7/10    W_24 = 6/10
W_03 = 8/10     W_14 = 9/10
W_04 = 10/10
OBST DP ALGORITHM
-- AN EXAMPLE (2/3) --
• The C_ij's and r_ij's, computed with C_ii = 0,
C_ij = min_(i+1 ≤ k ≤ j) ( C_(i,k−1) + C_kj + W_ij ),
and r_ij = the k that gives the minimum,
one row after another, top to bottom:

C_00 = 0                 C_11 = 0                   C_22 = 0                  C_33 = 0                    C_44 = 0
C_01 = 2/10, r_01 = 1    C_12 = 3.5/10, r_12 = 2    C_23 = 4/10, r_23 = 3     C_34 = 2.5/10, r_34 = 4
C_02 = 6.5/10, r_02 = 2  C_13 = 10.5/10, r_13 = 3   C_24 = 8.5/10, r_24 = 3
C_03 = 14/10, r_03 = 2   C_14 = 15/10, r_14 = 3
C_04 = 19/10, r_04 = 3

• Example of computing the C's after the top 2 rows have been computed:
C_13 = min_(2 ≤ k ≤ 3) ( C_(1,k−1) + C_k3 + W_13 ) = min( C_11 + C_23 + 7/10, C_12 + C_33 + 7/10 )
= min( 0 + 4/10 + 7/10, 3.5/10 + 0 + 7/10 ) = min( 11/10, 10.5/10 ) = 10.5/10,
corresponding to k = 3, thus r_13 = 3
OBST DP ALGORITHM
-- AN EXAMPLE (3/3) --
• Constructing the actual OBST from the r_ij's in the table:
• T = T_04 has root a_(r_04) = a_3
• ∴ the left subtree has a_1, a_2, i.e., it is T_02
• and the right subtree has the single node a_4
• Let's construct T_02
• T_02 has root a_(r_02) = a_2
• T_02 has a left subtree T_01: a single node a_1
• T_02 has a right subtree T_22, which is empty
• The tree is done
[Figure: the resulting OBST: root a_3, left child a_2 (with left child a_1), right child a_4]
EXERCISES
• Exercise 4: If all the q_i's are equal to 0, and all the p_i's are
equal (to 1/n), prove that the cost C(T) of any binary search
tree T is equal to (the average depth of T) + 1, where
(the average depth of T) = (1/n)( depth(a_1) + depth(a_2) + ⋯ + depth(a_n) )
• Exercise 5: Give a greedy algorithm for the matrix chain problem,
and prove/disprove that your greedy method guarantees or
does not guarantee optimality
• Exercise 6: Give a greedy algorithm for the OBST problem, and
prove/disprove that your greedy method guarantees or does
not guarantee optimality
WRAP-UP
• Done with Dynamic Programming
• Let’s review the lessons learned
• And point to other applications of DP
LESSONS LEARNED SO FAR
• DP is an optimization design technique
• It has a litmus test (the principle of optimality): DP applies if the POO holds
• Proof of the POO is almost always by contradiction
• Solving the recurrence relation is non-recursive, bottom up, from smallest subsolutions
to the final solution, where the subsolutions are usually recorded by filling a table
• The actual optimal solution is constructed by using the optimal split points recorded in
the table
• The time complexity of DP algorithms tends to be cubic (i.e., O(n³)), so they're slower than
greedy algorithms, but still reasonably fast
• DP is more powerful than greedy: in many situations where greedy solutions are not
optimal, DP applies and gives optimal solutions
• Still, DP does not always apply: when POO fails, don’t bother with DP
OTHER APPLICATIONS OF DYNAMIC
PROGRAMMING
• Hidden Markov Models (used in machine learning, statistics, Natural Language
Processing, Bioinformatics, etc.)
• Viterbi algorithm (in communications/networking, e.g., modulation/demodulation)
• Sequence alignment in genetics (Needleman–Wunsch algorithm)
• Longest common subsequence
• DP is used in Reinforcement Learning
• Scheduling (and time sharing)
• Query optimization in databases
• Robot control
• Flight control
• Many more