CS6140: Machine Learning
Homework Assignment # 4
Assigned: 03/26/2021 Due: 04/12/2021, 11:59pm, through Canvas
Six problems, 170 points in total. Good luck!
Prof. Predrag Radivojac, Northeastern University
Problem 1. (15 points) Consider a logistic regression problem where X = Rd and Y = {−1,+1}. Derive
the weight update rule that maximizes the conditional likelihood assuming that a data set D = {(xi, yi)}ni=1
is given.
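Once you have derived the update rule, one sanity check is to compare your analytic gradient of the conditional log-likelihood L(w) = Σ_i log σ(y_i w^T x_i) against a central finite-difference approximation. The sketch below does this on synthetic data; the data, seed, and tolerance are illustrative choices, and the gradient shown is the standard textbook form your derivation should reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = rng.choice([-1, 1], size=n)
w = rng.normal(size=d)

def log_likelihood(w):
    # log sigma(z) = -log(1 + exp(-z)), computed stably via logaddexp
    z = y * (X @ w)
    return -np.sum(np.logaddexp(0.0, -z))

def analytic_grad(w):
    # grad of sum_i log sigma(y_i w^T x_i) is sum_i y_i x_i sigma(-y_i w^T x_i)
    z = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(z))      # sigma(-z)
    return X.T @ (y * s)

# central finite differences, one coordinate at a time
eps = 1e-6
num_grad = np.array([
    (log_likelihood(w + eps * e) - log_likelihood(w - eps * e)) / (2 * eps)
    for e in np.eye(d)
])
assert np.allclose(num_grad, analytic_grad(w), atol=1e-5)
```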
Problem 2. (20 points) Consider a logistic regression problem with its initial solution obtained through the
OLS regression; i.e., w^(0) = (X^T X)^(-1) X^T y, in the context of the code provided in class (week 6). Recall
that x was drawn from a mixture of two Gaussian distributions with dim{x} = 2 (before adding a column
of ones) and that y ∈ {0, 1}. You probably noticed that the initial separation line is consistently closer to
the data points of class 0.
a) (10 points) Why was this the case? Draw a picture (if possible) to support your argument.
b) (5 points) Devise a better initial solution by modifying the standard formula w^(0) = (X^T X)^(-1) X^T y.
c) (5 points) Now again consider the case where y ∈ {−1,+1}. What is the form of the modified solution
from part (b) in this case?
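For reference, a minimal sketch of the setup and the OLS initialization, assuming two well-separated 2-D Gaussians (the means, spread, and seed here are illustrative, not the exact class code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x0 = rng.normal(loc=[-1.0, -1.0], scale=0.7, size=(n, 2))   # class 0
x1 = rng.normal(loc=[+1.5, +1.5], scale=0.7, size=(n, 2))   # class 1
X = np.vstack([x0, x1])
X = np.hstack([np.ones((2 * n, 1)), X])                     # column of ones
y = np.concatenate([np.zeros(n), np.ones(n)])

# OLS initial solution w0 = (X^T X)^{-1} X^T y, via a linear solve
w0 = np.linalg.solve(X.T @ X, X.T @ y)
```

Plotting the line where X @ w0 equals 0.5 (the midpoint of the {0, 1} targets) against the scatter of both classes reproduces the behavior described in the problem.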
Problem 3. (40 points) Consider two classification concepts given in Figure 1, where x ∈ X = [−6, 6] ×
[−4, 4], y ∈ Y = {−1,+1}, and p(y|x) ∈ {0, 1} is defined in the drawing.
[Figure 1 here: two panels, A and B, each showing three positively labeled squares inside X. Recoverable upper left-hand corner coordinates: panel A — (−2, 1), (2, 1), (−4, 3); panel B — (−1, −2), (2, 0), (−4, −3).]
Figure 1: Two concepts where examples that fall within any of the three 3 × 3 (panel A) or 1 × 1 (panel
B) squares are labeled positive and the remaining examples (outside each of the squares but within X ) are
labeled negative. The position of the point x = (x1, x2) in the upper left-hand corner of each square is
shown in the picture. Consider the horizontal axis to be x1 and the vertical axis to be x2.
Your experiments in this question will rely on generating a data set of size n ∈ {250, 1000, 10000} drawn
from a uniform distribution on X and labeled according to the rules from Figure 1; i.e., P(Y = 1|x) = 1 if the
randomly drawn x falls inside any of the three squares in either of the two panels, and P(Y = 1|x) = 0
otherwise. The goal of the following two problems will be to train and evaluate classifiers created from the
data generated in this way. You can use any library you want in this assignment and do programming in
Python, MATLAB, R or C/C++. Your code should be easy to run for each question and subquestion below
so that we can replicate your results to the maximum extent possible.
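The sampling procedure above can be sketched as follows; the square corners and side length are my reading of panel A of Figure 1 (an assumption — adjust them to match your reading of the figure):

```python
import numpy as np

# Assumed upper-left corners of the three 3x3 squares in panel A
SQUARES_A = [(-2.0, 1.0), (2.0, 1.0), (-4.0, 3.0)]
SIDE = 3.0

def label(x1, x2, squares=SQUARES_A, side=SIDE):
    # y = +1 if the point falls inside any square, else -1
    for (cx, cy) in squares:
        if cx <= x1 <= cx + side and cy - side <= x2 <= cy:
            return 1
    return -1

def make_dataset(n, rng):
    # uniform sampling over X = [-6, 6] x [-4, 4]
    x1 = rng.uniform(-6, 6, size=n)
    x2 = rng.uniform(-4, 4, size=n)
    y = np.array([label(a, b) for a, b in zip(x1, x2)])
    return np.column_stack([x1, x2]), y

X, y = make_dataset(1000, np.random.default_rng(0))
```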
Consider single-output feed-forward neural networks with one or two hidden layers such that the number of
hidden neurons in each layer is h1 ∈ {1, 4, 12} and h2 ∈ {0, 3}, respectively, with h2 = 0 meaning that there
is no second hidden layer. Consider one of the standard objective functions as your optimization criterion
and use early stopping and regularization as needed. Consider a hyperbolic tangent activation function in
each neuron and the output but you are free to experiment with others if you’d like to. For each of the
architectures, defined by a parameter combination (h1, h2), evaluate the performance of each model using
classification accuracy, balanced accuracy, and area under the ROC curve as your three performance criteria.
To evaluate the performance of your models, use cross-validation. However, to assess how accurate that
cross-validation estimate is, generate another very large data set on a fine grid in X. Then use the predictions
from your trained model on all these points to determine the “true” performance. You can threshold your
predictions in the middle of your prediction range (i.e., at 0.5 if you are predicting between 0 and 1) to
determine binary predictions of your models and to then compare those with true class labels you generated
on the fine grid.
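The "true" performance estimate on the fine grid can be sketched as below; `model` and `label_fn` are hypothetical callables (your trained network's score function in [0, 1] and the labeling rule from Figure 1), and the grid step is an arbitrary choice:

```python
import numpy as np

def true_accuracy(model, label_fn, step=0.05):
    # dense grid over X = [-6, 6] x [-4, 4]
    g1 = np.arange(-6, 6 + step, step)
    g2 = np.arange(-4, 4 + step, step)
    xx, yy = np.meshgrid(g1, g2)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    y_true = np.array([label_fn(a, b) for a, b in pts])
    scores = model(pts)                      # predictions in [0, 1]
    y_pred = np.where(scores >= 0.5, 1, -1)  # threshold mid-range
    return np.mean(y_pred == y_true)

# usage with trivial stand-ins: a model that always scores 0 and an
# all-negative labeling rule, so the accuracy is exactly 1.0
acc = true_accuracy(lambda p: np.zeros(len(p)), lambda a, b: -1)
```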
Provide meaningful comments about all aspects of this exercise (performance results for different network
architectures, accuracy of cross-validation, run time, etc.). The comments should not just restate the results
but rather capture trends and give reasoning as to why certain behavior was observed.
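One possible way to enumerate the (h1, h2) architectures is with scikit-learn (an assumed library choice; any of the allowed languages and libraries works), using the tanh activation suggested above and early stopping plus an L2 penalty as the regularization knobs:

```python
from sklearn.neural_network import MLPClassifier

for h1 in (1, 4, 12):
    for h2 in (0, 3):
        # h2 = 0 means no second hidden layer
        layers = (h1,) if h2 == 0 else (h1, h2)
        clf = MLPClassifier(hidden_layer_sizes=layers, activation="tanh",
                            early_stopping=True, alpha=1e-3, max_iter=2000)
        # clf.fit(X_train, y_train) on data generated as described above
```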
Problem 4. (70 points) Matrix factorization with applications. Consider an n × d real-valued data matrix
X = (x_1^T, x_2^T, . . . , x_n^T). We will attempt to approximate this matrix using the following factorization
X̂ = UV^T
where U = (u_1^T, u_2^T, . . . , u_n^T) is an n × k matrix and V = (v_1^T, v_2^T, . . . , v_d^T) is a d × k matrix of real numbers,
and where k < n, d is a parameter to be explored and determined. Notice that the value x_ij in X can be
approximated by u_i^T v_j and that x_i^T, the i-th row of X, can be approximated by u_i^T V^T, giving x̂_i = V u_i.
We will formulate the matrix factorization process as the following minimization

min_{U,V} Σ_{i,j} (x_ij − u_i^T v_j)^2 + λ ( Σ_i ||u_i||^2 + Σ_j ||v_j||^2 )

which minimizes the sum-of-squared-errors between real values x_ij and reconstructed values x̂_ij = u_i^T v_j.
The regularization parameter λ ≥ 0 is user-selected. This problem can be directly solved using gradient
descent, but we will attempt a slightly different approach. To do this, we can see that for a fixed V we can
find optimal vectors ui by minimizing
||V u_i − x_i||^2 + λ · ||u_i||^2
which can be solved in closed form, analogous to ridge-regularized OLS regression, for every i. We can equivalently express the
optimization for vectors vj and find the solution for every j. Then, we can alternate these steps until
convergence. This procedure is called the Alternating Least Squares (ALS) algorithm for matrix factorization.
It has the following steps:
Initialize U and V
repeat
for i = 1 to n
ui = formula #1
end
for j = 1 to d
vj = formula #2
end
until convergence
where you are expected to derive formula #1 and formula #2.
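The loop above can be organized as the following skeleton. The two update formulas are deliberately left as stubs, since deriving them is part (a); the initialization scale and the convergence test (relative change in the objective) are illustrative choices, not prescribed ones.

```python
import numpy as np

def update_u(x_row, V, lam):
    raise NotImplementedError("formula #1, to be derived in part (a)")

def update_v(x_col, U, lam):
    raise NotImplementedError("formula #2, to be derived in part (a)")

def als(X, k, lam, n_iter=50, tol=1e-6, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(d, k))
    prev = np.inf
    for _ in range(n_iter):
        for i in range(n):
            U[i] = update_u(X[i], V, lam)      # formula #1
        for j in range(d):
            V[j] = update_v(X[:, j], U, lam)   # formula #2
        # regularized sum-of-squared-errors objective from the text
        obj = np.sum((X - U @ V.T) ** 2) + lam * (np.sum(U**2) + np.sum(V**2))
        if prev - obj < tol * max(abs(prev), 1.0):  # negligible progress
            break
        prev = obj
    return U, V
```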
a) (10 points) Derive the optimization steps for the ALS algorithm by finding formula #1 and formula
#2 in the pseudocode listed above.
b) (20 points) Consider now that some values in X are missing (e.g., the rows of X are users and the
columns of X are movie ratings, when available) and that we are interested in carrying out matrix
completion using matrix factorization presented above. We would like to use the ALS algorithm, but
the problem is that we must exclude all missing values from optimization. Derive now a modified ALS
algorithm (formulas #1 and #2) to adapt it for matrix completion. Hint: consider adding an indicator
matrix W to the optimization process, where wij = 1 if xij is available and wij = 0 otherwise.
c) (20 points) Consider now a MovieLens database available at
http://grouplens.org/datasets/movielens/
and find a data set most appropriate to evaluate your algorithm from the previous step; e.g., one of
the 100k data sets. Now, implement the ALS algorithm on your data set and evaluate it using the
mean-squared-error as the criterion of success. You can randomly remove 25% of the ratings, train
a recommendation system, and then test it on the test set. You will have to make certain decisions
yourselves, such as initialization of U and V, convergence criterion, or picking k and λ.
d) (10 points) Describe your full experimentation process (e.g., how did you vary k) and observations
from (c). Additionally, can you provide some reasoning as to what k is and what matrices U and V
are?
e) (10 points) Compare your method against the baseline that fills in every missing movie rating value
xij as an average over all users who have rated the movie j. Discuss your empirical findings.
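The baseline in (e) can be sketched by reusing the indicator-matrix idea from (b): each missing rating is filled with the mean rating of that movie over the users who rated it (names here are illustrative).

```python
import numpy as np

def baseline_predict(X, W):
    # X: n x d ratings (arbitrary values where W == 0); W in {0, 1}
    counts = W.sum(axis=0)                       # raters per movie
    sums = (X * W).sum(axis=0)                   # sum of observed ratings
    movie_mean = np.divide(sums, counts,
                           out=np.zeros_like(sums, dtype=float),
                           where=counts > 0)     # 0 if nobody rated movie j
    return np.where(W == 1, X, movie_mean)       # keep observed, fill missing

# tiny usage example: entry (0, 1) is missing and gets movie 1's mean, 2.0
X = np.array([[5.0, 0.0], [3.0, 2.0]])
W = np.array([[1, 0], [1, 1]])
filled = baseline_predict(X, W)
```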
Problem 5. (15 points) Prove representational equivalence of a three-layer neural network with linear
activation function in all neurons and a single-layer neural network with the same activation function.
Assume a single-output network.
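A numerical illustration of the claim (not a substitute for the proof): with linear activations the network composes linear maps, so a single layer with the product of the weight matrices computes the same function. Layer sizes below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(5, 4))   # input -> hidden 1
W2 = rng.normal(size=(3, 5))   # hidden 1 -> hidden 2
w3 = rng.normal(size=(1, 3))   # hidden 2 -> single output

x = rng.normal(size=(4,))
three_layer = w3 @ (W2 @ (W1 @ x))   # layer-by-layer, identity activations
single_layer = (w3 @ W2 @ W1) @ x    # equivalent single-layer weights
assert np.allclose(three_layer, single_layer)
```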
Problem 6. (10 points) Let A, B, C, and D be binary input variables (features). Give decision trees to
represent the following Boolean functions:
a) (3 points) A ∧ B̄
b) (3 points) A ∨ (B̄ ∧ C)
c) (4 points) A ⊕ B̄
where Ā is the negation of A and ⊕ is the exclusive OR operation.
Directions and Policies
Submit a single package containing all answers, results and code. Your submission package should be
compressed and named firstnamelastname.zip (e.g., predragradivojac.zip). In your package there should
be a single pdf file named main.pdf that will contain answers to all questions, all figures, and all relevant
results. Your solutions and answers must be typed^1 and make sure that you type your name and Northeastern
username (email) on top of the first page of the main.pdf file. The rest of the package should contain all
code that you used. The code should be properly organized in folders and subfolders, one for each question
or problem. All code, if applicable, should be turned in when you submit your assignment as it may be
necessary to demo your programs to the teaching assistants. Use Matlab, Python, R, Java, or C/C++.
However, you are encouraged to use languages with good machine learning libraries (e.g., Matlab, Python,
R), which may be handy in future assignments.
Unless there are legitimate circumstances, late assignments will be accepted up to 5 days after the due date
and graded using the following rules:
on time: your score × 1
1 day late: your score × 0.9
2 days late: your score × 0.7
3 days late: your score × 0.5
4 days late: your score × 0.3
5 days late: your score × 0.1
For example, this means that if you submit 3 days late and get 80 points for your answers, your total number
of points will be 80 × 0.5 = 40 points.
All assignments are individual, except when collaboration is explicitly allowed. All text must be your
own or, for group assignments, that of the members of the group. All sources used for problem
solution must be acknowledged; e.g., web sites, books, research papers, personal communication with
people, etc. Academic honesty is taken seriously! For detailed information see Office of Student Conduct
and Conflict Resolution.
^1 We recommend LaTeX; in particular, the TeXShop-MacTeX combination on a Mac and the TeXnicCenter-MiKTeX combination
on Windows. An easy way to start with LaTeX is to use the freely available LyX. You can also use Microsoft Word or other
programs that can display formulas professionally.