MACM 409/MATH 709 – Assignment 6
Notice
All work you hand in must be your own effort. Cite any sources (books, lecture
notes, websites) and acknowledge anyone you discuss the questions with (this
includes students/TAs/instructors). Your work should be submitted via Crowdmark –
see Canvas for the due date. Make sure your answers are clear and neatly written.
This homework contains both mathematical questions and a computational question. It
also contains additional questions that students in MATH 709 must complete. If you are a
student in MACM 409, you are welcome to complete these questions as well for bonus credit.
Mathematical questions for all students
1. Exercise 28.1 from the textbook
2. Exercise 28.2, parts (a)–(b) (for part (b) you should assume that A is self-adjoint)
3. Exercise 35.5
Computational questions for all students
4. You have seen in class that the power method converges to the largest eigenvalue (and an
associated eigenvector) of a given symmetric matrix A ∈ R^(m×m). However, suppose that you
do not know all the entries of A. This is often the case in the real world. For example, A could
represent the matrix of a recommender system (think Netflix, Amazon, etc.), since typically
no user rates all of the objects in the system (e.g. movies, books, etc.). It is highly desirable
to compute the largest eigenvalues and eigenvectors of such matrices, since doing so allows
companies to make accurate recommendations to their users about which movies to watch or
which products to buy.
Perhaps surprisingly, a simple modification of the power method – known as the noisy
power method – can be used to approximately compute the leading eigenvalue and
eigenvector, even when many of the entries of A are missing. The pseudocode is as follows:
Initialize an arbitrary x^(0) ∈ R^m with ‖x^(0)‖_2 = 1, and set λ^(0) = 0
For k = 1, 2, . . . , kmax
• Compute v = Âx^(k−1) + (σ/√m)w, where w ∈ R^m is a unit normal random vector
• Compute x^(k) = v/‖v‖_2
• Compute λ^(k) = (x^(k))^T A x^(k)
end
In this code, σ > 0 is a parameter (the noise magnitude), and Â ∈ R^(m×m) is the matrix
known to you. That is, it is the matrix obtained from A by setting all its unknown entries
to zero. Note that the random vector w should be redrawn in each step (do not use the
same w for all steps).
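For concreteness, here is a minimal MATLAB sketch of this pseudocode. The function name
noisypower and its interface are illustrative assumptions, not part of any provided code; as
in the pseudocode, the iteration uses the known matrix Â (Ahat below), while the eigenvalue
estimate uses A itself, which is available in the experiments since the script generates A.

% Sketch only: the name noisypower and this interface are hypothetical.
% Ahat is the partially known matrix, sigma the noise magnitude,
% kmax the number of iterations.
function lambda = noisypower(A, Ahat, sigma, kmax)
    m = size(Ahat, 1);
    x = randn(m, 1);
    x = x / norm(x);                      % arbitrary x^(0) with ||x^(0)||_2 = 1
    lambda = 0;                           % lambda^(0) = 0
    for k = 1:kmax
        w = randn(m, 1);                  % redraw w at every step
        v = Ahat * x + (sigma / sqrt(m)) * w;
        x = v / norm(v);                  % x^(k)
        lambda = x' * A * x;              % lambda^(k), the Rayleigh quotient
    end
end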
Your goal in this question is to study how the parameter p (the fraction of entries of A
you are given) and the noise magnitude σ affect the approximation of the leading eigenvalue
λ. Download the script partialmatrix.m from Crowdmark. This script contains:
1. Code to generate the matrix A ∈ R^(200×200). Note that the largest eigenvalue of A is
λ = 1.
2. Code to compute the matrix Â by randomly choosing a fraction p of the entries of A.
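For intuition only, one way such an Â could be formed is sketched below; the provided script
defines the actual sampling, which may differ (e.g. it may symmetrize the mask):

mask = rand(size(A)) <= p;   % keep each entry independently with prob. p (sketch)
Ahat = A .* mask;            % unknown entries are set to zero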
I recommend you use the following values for p and σ:
pvalues = 1/20:1/20:1;
sigmavalues = 1.25.^(-(1:20));
(i) For each value of (p, σ), compute the error err = |1 − λ^(kmax)|, averaged over 50 trials,
i.e. 50 random draws of the matrix Â. Plot the results using the following commands:
colormap parula;
pcolor(pvalues,-log10(sigmavalues),log10(err))
Comment on your results, making sure you (a) explain what the above figure plots and (b)
discuss the relationship between p and σ you observe.
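For reference, a hedged sketch of one way to organize this experiment is given below. Here
ntrials, kmax, and the inline masking are illustrative assumptions, noisypower is the
hypothetical helper sketched earlier, and A, pvalues, and sigmavalues are as defined above.

% Sketch of the trial-averaging loop; the iteration budget kmax is assumed.
ntrials = 50;
kmax = 100;
err = zeros(length(sigmavalues), length(pvalues));
for i = 1:length(sigmavalues)
    for j = 1:length(pvalues)
        e = 0;
        for t = 1:ntrials
            mask = rand(size(A)) <= pvalues(j);   % redraw Ahat each trial
            Ahat = A .* mask;
            lambda = noisypower(A, Ahat, sigmavalues(i), kmax);
            e = e + abs(1 - lambda);              % error for this trial
        end
        err(i, j) = e / ntrials;                  % average over the trials
    end
end
colormap parula;
pcolor(pvalues, -log10(sigmavalues), log10(err))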
(ii) If p = 1/5, i.e. 80% of the entries of A are missing, what is the approximate range of
values of σ that ensure a reasonable approximation of the leading eigenvalue?
(iii) How does the distribution of the eigenvalues of A affect the performance of this method?
Modify the script partialmatrix.m to consider a series of different distributions for the
eigenvalues, present the corresponding results, and discuss your conclusions. (Hint: keep in
mind that the convergence of eigenvalue algorithms, as we have seen in class, is typically
governed by ratios of eigenvalues, e.g. |λ2/λ1| for the power method.)
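As a hedged starting point for part (iii), one way to build a symmetric test matrix with a
prescribed spectrum is sketched below; the geometric decay ratio r is purely illustrative, and
the largest eigenvalue stays at 1, matching the convention in partialmatrix.m.

% Sketch: symmetric 200 x 200 matrix with a prescribed (geometric) spectrum.
m = 200;
r = 0.5;                          % illustrative decay ratio; try several values
lambdas = r.^(0:m-1);             % eigenvalues 1, r, r^2, ...
[Q, ~] = qr(randn(m));            % random orthogonal eigenvector basis
A = Q * diag(lambdas) * Q';       % symmetric with the chosen eigenvalues
A = (A + A') / 2;                 % enforce exact symmetry against roundoff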
Questions for MATH 709 students only
None
