ECE 5759 Assignment 2    Due: September 24, 11:59 PM Eastern time

Instructions

You have to submit all MATLAB code at the CarmenCanvas website along with your homework. To submit MATLAB code (and your assignment, if you want to submit it electronically), please go to Assignments/Assignment 2 and upload as many MATLAB code files and PDFs as you like. Please make sure you name your MATLAB code according to the problem number. For example, use LastName_2c.m as the filename for the code that you submit for Problem 2(c).

Problem 1. Futility of Necessary Conditions (20 marks)

Consider the unconstrained optimization problem min_{x ∈ R} x^3/3.

Part (a) Show that x* = 0 satisfies the first- and second-order necessary conditions for optimality.

Part (b) Next, pick your favorite x_0 ∈ (0, 1) and use the steepest descent method with α_k = 1 for all k. Plot the semilogy curve of k vs. x_k in MATLAB for 100 iterations.
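Before proving anything, it can help to watch the iterates. Below is a minimal sketch (in Python for illustration only; the assignment asks for MATLAB, and the starting point 0.5 is an arbitrary pick in (0, 1)) of the update x_{k+1} = x_k − α_k f'(x_k) with f'(x) = x^2 and α_k = 1:

```python
# Sketch of steepest descent on f(x) = x**3 / 3 with constant stepsize 1.
# This only illustrates the update x_{k+1} = x_k - alpha * f'(x_k),
# where f'(x) = x**2; the required semilogy plot must be done in MATLAB.
def steepest_descent(x0, alpha=1.0, iters=100):
    xs = [x0]
    x = x0
    for _ in range(iters):
        x = x - alpha * x**2  # gradient step
        xs.append(x)
    return xs

trajectory = steepest_descent(0.5)  # 101 values x_0, ..., x_100
```

In MATLAB, the same loop followed by semilogy(0:100, xs) gives the requested curve.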
Does the sequence {x_k} converge? Why? (Present a mathematical proof of your answer.) No need to turn in your MATLAB code.

Part (c) Suppose now you are solving min_{x ∈ R^n} f(x). You used your favorite algorithm to generate a sequence {x_k} that converges to x_∞ (just like in Part (b) above). How would you know if x_∞ is indeed a local minimum? Give two conditions to ascertain that x_∞ is a global minimum (one of the conditions
is obvious, the other condition is rather tricky).

Problem 2. Divergence with Diminishing Stepsize (35 marks)

The purpose of this problem is to learn that if you choose the initial point "somewhat far" from the optimal point, then gradient methods may diverge. Consider the one-dimensional function f(x) = (2/3)|x|^3 + (1/2)x^2. Since f(x) ≥ 0 and f(0) = 0, we immediately have x* = 0. Pick α_k = γ/(k+1), where γ is a positive scalar, for use in the steepest descent method.

Part (a) Plot the function in MATLAB for x ranging from −5 to 5. Also compute the second derivative ∇²f of the function for x ∈ R \ {0} and show that it is unbounded. Is the function f convex?

Part (b) Pick γ = 1 and compute the expression for x_{k+1} in terms of x_k. Show (without using MATLAB) that if γ = 1 and |x_0| ≥ 1, then |x_k| ≥ k + 1 for all k ∈ N. Thus, the steepest descent method diverges if γ = 1 and |x_0| ≥ 1. Hint: Use the principle of mathematical induction.

Part (c) Use trial and error to identify a pair (γ, x_0) for which the steepest descent method converges.
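To see the mechanics of the diminishing stepsize before hunting for a pair, here is a hedged sketch (Python for illustration only; the assignment requires MATLAB). It uses f'(x) = 2x|x| + x and α_k = γ/(k+1) from the problem statement; the trial pair (γ, x_0) = (0.1, 1) is an arbitrary example, not "the" answer:

```python
# Steepest descent on f(x) = (2/3)|x|^3 + (1/2)x^2 with diminishing
# stepsize alpha_k = gamma / (k+1); the derivative is f'(x) = 2*x*|x| + x.
def descend(gamma, x0, iters):
    x = x0
    for k in range(iters):
        x = x - (gamma / (k + 1)) * (2 * x * abs(x) + x)
    return x

x_div = descend(1.0, 1.0, 10)    # gamma = 1, |x0| = 1: |x_k| keeps growing
x_try = descend(0.1, 1.0, 500)   # one arbitrary trial pair (gamma, x0)
```

Sweeping a grid of (γ, x_0) values in a double loop is one easy way to search for your own pair.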
You can use MATLAB to identify a pair. (I expect everyone in the class to have a different answer from the others for this question.) Show the first six points x_0, x_1, . . . , x_5 on the plot using the MATLAB command

text(x,y,'x_1','HorizontalAlignment','left');

or

text(x,y,'x_2','HorizontalAlignment','right');

The above command puts the text x_1 at the position (x, y) on the MATLAB figure. The steepest descent method does not converge in this problem because the second derivative of the function is unbounded!

Part (d) Pick
σ = 0.001, s = 1, β = 0.2, and some x_0 such that |x_0| ≥ 1. Use the Armijo rule and see whether the iterations converge. Recall that the Armijo rule sets α_k = s β^{m_k}, where m_k is the smallest non-negative integer m (m_k can be equal to 0) for which the following inequality is satisfied:

f(x_k) − f(x_k + s β^m d_k) ≥ −σ s β^m ∇f(x_k)^T d_k.

Since this is the steepest descent algorithm, d_k = −∇f(x_k). Note: If the method does not converge, then try changing the values of β and σ. To ensure convergence, the values of β and σ will depend on the initial point x_0 you choose. In fact, for fun (and if you don't have anything better to do), you can play with various values of (x_0, σ, β).

Problem 3. Newton's Method (25 marks)

In this problem, we
will code Newton's method on a quartic optimization problem. Define

Q = [2 −1 0; −1 2 −1; 0 −1 2],   b = [10; 2; −20].

In Assignment 1, you showed that the matrix Q is positive definite. Consider the function f : R^3 → R given by f(x) = (1/2) x^T Q x + b^T x + 0.5 x_1^4 + 0.2 x_3^4. We now consider the minimization problem min_{x ∈ R^3} f(x).

Part (a) What is ∇f(x) and ∇²f(x)? Is f a convex function?

Part (b) Pick your favorite x_0. Write a MATLAB code using Newton's method with stepsize determined by the Armijo rule with σ = 0.05, s = 1, β = 0.5. Does it converge to the optimal solution?

Part
(c) What is the value of x* you computed in Part (b)? Define the error as e_k = ‖x_k − x*‖ and define z_k = e_{k+1}/e_k. Start with your favorite x_0 and plot the k vs. z_k graph for k = 1, . . . , 20 for Newton's method on the same plot. What do you notice in the plot? (No need to turn in the code for this problem, since you are submitting the code for Part (b).)

Problem
4. Conjugate Direction Method (20 marks)

Generate a positive definite matrix by using the following MATLAB code:

function Q = posdef(n)
    A = randn(n,n);
    [U,V] = eig(A+A');
    Q = U'*diag(5*rand(n,1))*U;
end

The above function takes the matrix size n as input and returns a positive definite matrix Q. Let n = 1000. Generate b using randn(n,1).

Part (a) Pick your n favorite linearly independent vectors (you may use rand(n,1) n times or use unit vectors along every axis). Identify the Q-conjugate vectors using the linearly independent vectors you chose (feel free to use the result from Problem 2 in Assignment 1 and MATLAB). Submit the code.

Part (b) Use the n conjugate vectors computed
above to solve the following optimization problem min_{x ∈ R^n} (1/2) x^T Q x + b^T x in MATLAB using the conjugate direction method (recall that α_k must solve the line minimization problem at every k).
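For intuition about how Parts (a) and (b) fit together, here is a hedged sketch (Python/NumPy for illustration on a small n; the submission must be MATLAB, and the function names are my own). It Q-conjugates a set of linearly independent vectors by Gram-Schmidt with the Q-inner product, then minimizes (1/2) x^T Q x + b^T x by exact line minimization along each direction; after n steps the gradient Qx + b vanishes:

```python
import numpy as np

def q_conjugate(Q, basis):
    # Gram-Schmidt with the Q-inner product: each new direction is the
    # basis vector minus its Q-projections onto the earlier directions.
    dirs = []
    for v in basis.T:
        w = v.copy()
        for d in dirs:
            w = w - (d @ Q @ v) / (d @ Q @ d) * d
        dirs.append(w)
    return np.array(dirs).T

def conjugate_direction_method(Q, b, D):
    # Minimize 0.5*x'Qx + b'x; alpha_k below solves the line
    # minimization exactly, since the objective is quadratic.
    x = np.zeros(len(b))
    for k in range(D.shape[1]):
        d = D[:, k]
        g = Q @ x + b                     # gradient at x_k
        x = x + (-(g @ d) / (d @ Q @ d)) * d
    return x

rng = np.random.default_rng(0)
n = 5                                     # small n just for illustration
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)               # positive definite
b = rng.standard_normal(n)
D = q_conjugate(Q, np.eye(n))             # conjugate the unit vectors
x_star = conjugate_direction_method(Q, b, D)
```

The same two steps in MATLAB, with Q from posdef(1000) and b = randn(n,1), cover both parts of the problem.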