
First Name: Last Name:
Account: @ucsc.edu
CSE 142 Final Exam, Fall 2020 — Sample
VERY IMPORTANT. PLEASE READ CAREFULLY.
This is a 4-hour take-home exam that may be started after Dec 11 at 8am PST and
must be submitted by Dec 12 at 11:59am PST.
Organize your solutions in a clear and recognizable way, either typesetting them with
LaTeX or writing them on paper and then scanning or photographing them. Ensure that the
details of your answers can be read easily.
Submit your solution as a PDF file or JPG(s) to Canvas.
Multi-part questions generally have their points split evenly over the various parts.
Standard test-taking advice applies: read all of the questions carefully, and skip over
problems that are difficult for you and/or are taking too much time.
This exam is open-note. You may refer to your assignments, our slides, lecture videos,
or any other materials distributed as part of this course. You may NOT use materials from
outside of the class. You will finish the exam by yourself, and you are NOT allowed to receive
help from anyone.
IMPORTANT: On the first page of your submission, before your answers to the questions,
include a pledge that you have completed the exam independently and have not received
help from others or from the Internet.
If you have any questions, please email me at yangliu@ucsc.edu, and cc your TAs (Reilly
Raab: rraab@ucsc.edu, Tianyi Luo: tluo6@ucsc.edu).
Good Luck
Grades breakdown

Problem       Points   Score
1             12
2             15
3             12
4             12
5             15
6             10
7             8
Final score   100
1. (12 pts) Basic concepts: short answers (1-2 sentences each).
(a) (4 pts) Describe how least-squares linear regression is a Maximum Likelihood
method.
(b) (4 pts) What is the purpose of regularization?
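A study sketch for (a) and (b), assuming the usual i.i.d. Gaussian-noise model \(y_i = \mathbf{w}^{\top}\mathbf{x}_i + \epsilon_i\) with \(\epsilon_i \sim \mathcal{N}(0, \sigma^2)\) (a common derivation, not an official solution):

\[
\log p(\mathbf{y} \mid X, \mathbf{w})
= \sum_{i=1}^{n} \log \mathcal{N}\bigl(y_i \mid \mathbf{w}^{\top}\mathbf{x}_i, \sigma^2\bigr)
= -\frac{1}{2\sigma^2} \sum_{i=1}^{n} \bigl(y_i - \mathbf{w}^{\top}\mathbf{x}_i\bigr)^2 + \text{const},
\]

so maximizing the likelihood in \(\mathbf{w}\) is exactly minimizing the sum of squared errors. For (b), a regularizer such as the ridge penalty \(\lambda \lVert \mathbf{w} \rVert_2^2\) is added to the training loss to penalize large weights, which reduces variance and helps prevent overfitting.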
2. (15 pts) Medium-length answers.
(a) (2 pts) What is the difference between regression and classification?
(b) (2 pts) Why might k-Nearest Neighbors (k > 1) work better than the simple
nearest-neighbor classifier in the presence of noise?
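A minimal NumPy illustration of (b); the data, labels, and noise level below are made up purely to show the effect of majority voting:

import numpy as np

def knn_predict(X_train, y_train, x, k):
    """Predict the majority label among the k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.bincount(y_train[nearest]).argmax()

rng = np.random.default_rng(0)
# Two 1-D clusters; 10% of the training labels are flipped (label noise).
X = np.concatenate([rng.normal(-1, 0.5, 50), rng.normal(1, 0.5, 50)]).reshape(-1, 1)
y = np.array([0] * 50 + [1] * 50)
flip = rng.choice(100, size=10, replace=False)
y[flip] = 1 - y[flip]

x_test = np.array([0.9])  # clearly inside the class-1 cluster
print("k=1:", knn_predict(X, y, x_test, k=1))  # can be fooled by one flipped neighbor
print("k=5:", knn_predict(X, y, x_test, k=5))  # majority vote averages out the noise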
3. (12 pts) How do we infer the bias of a coin from a sample of flips, say (HHHTT)?
(a) (6 pts) Please refer to the example in our lecture.
(b) (6 pts) Q2 of Quiz 1.
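A worked sketch of the maximum-likelihood answer for (HHHTT), i.e. 3 heads in 5 flips (the lecture example may frame this differently, e.g. with a Bayesian prior):

\[
L(\theta) = \theta^{3}(1-\theta)^{2}, \qquad
\frac{d}{d\theta}\log L(\theta) = \frac{3}{\theta} - \frac{2}{1-\theta} = 0
\;\Longrightarrow\; \hat{\theta}_{\mathrm{MLE}} = \frac{3}{5}.
\]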
4. (12 pts) Naive Bayes
(a) (4 pts) Example in our lecture slides, page 3.
(b) (5 pts) Q3 of Quiz 2.
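A study sketch of Bernoulli Naive Bayes with Laplace smoothing; the tiny dataset and the helper names nb_fit / nb_predict are hypothetical, not from the slides:

import numpy as np

# Made-up dataset: rows are binary feature vectors, y holds class labels.
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 1], [0, 1]])
y = np.array([1, 1, 0, 0, 1, 0])

def nb_fit(X, y, alpha=1.0):
    """Estimate class priors and smoothed P(feature_j = 1 | class)."""
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    cond = {c: (X[y == c].sum(axis=0) + alpha) / ((y == c).sum() + 2 * alpha)
            for c in classes}
    return priors, cond

def nb_predict(x, priors, cond):
    """Pick the class maximizing log P(c) + sum_j log P(x_j | c)."""
    scores = {}
    for c in priors:
        p = cond[c]
        scores[c] = np.log(priors[c]) + np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return max(scores, key=scores.get)

priors, cond = nb_fit(X, y)
print(nb_predict(np.array([1, 0]), priors, cond))  # prints 1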
5. (15 pts) Support Vector Machines.
Be familiar with SVMs. Why do we need the dual formulation of the SVM, and what are
the properties of the dual problem?
(a) (3 pts) Q2 of Quiz 3.
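A sketch of the standard hard-margin formulations (notation may differ from the slides):

\[
\text{Primal:}\quad \min_{\mathbf{w},\,b}\; \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}
\quad \text{s.t.}\quad y_i(\mathbf{w}^{\top}\mathbf{x}_i + b) \ge 1 \;\;\forall i,
\]
\[
\text{Dual:}\quad \max_{\boldsymbol{\alpha} \ge 0}\; \sum_i \alpha_i
- \tfrac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j\, \mathbf{x}_i^{\top}\mathbf{x}_j
\quad \text{s.t.}\quad \sum_i \alpha_i y_i = 0.
\]

The dual depends on the data only through inner products \(\mathbf{x}_i^{\top}\mathbf{x}_j\), which is what allows a kernel \(k(\mathbf{x}_i, \mathbf{x}_j)\) to be substituted, and at the optimum \(\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i\) with \(\alpha_i > 0\) only for the support vectors.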
6. (10 pts) Perceptron Algorithm:
(a) (2 pts) Basic properties of the Perceptron.
(b) (8 pts) Apply the Perceptron algorithm: Q1 of Quiz 3.
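A minimal runnable sketch of the mistake-driven update rule (the toy data are made up; the key property for (a) is that the algorithm converges after finitely many mistakes whenever the data are linearly separable):

import numpy as np

def perceptron(X, y, epochs=100):
    """Classic Perceptron: on each mistake, update w <- w + y_i * x_i.
    Labels are +1/-1; the bias is folded in as a constant feature."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi
                mistakes += 1
        if mistakes == 0:            # a full pass with no mistakes: done
            break
    return w

# Made-up linearly separable toy data (OR-like labeling).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, 1])
w = perceptron(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # matches y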
7. (8 pts) Expectation-Maximization
(a) (4 pts) Describe the Expectation-Maximization (EM) method.
(b) (4 pts) Describe EM for probabilistic k-means (k = 2, with a Gaussian mixture
assumption). Hint: a Gaussian distribution is determined by (µ, σ²). You need to show
the updates for these parameters (if they exist): how do you estimate the Gaussian
parameters?
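A hedged NumPy sketch of (b) for a 1-D, two-component mixture; the initialization scheme and the synthetic data are assumptions. The E-step computes responsibilities and the M-step re-estimates (π_k, µ_k, σ_k²) as responsibility-weighted averages:

import numpy as np

def em_gmm_1d(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture (study sketch)."""
    rng = np.random.default_rng(0)
    pi = np.array([0.5, 0.5])                  # mixing weights
    mu = rng.choice(x, size=2, replace=False)  # init means at two data points
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: r[i, k] = P(component k | x_i) under current parameters.
        dens = np.stack([pi[k] / np.sqrt(2 * np.pi * var[k])
                         * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                         for k in range(2)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates.
        Nk = r.sum(axis=0)
        pi = Nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return pi, mu, var

# Synthetic data from two Gaussians; EM should roughly recover them.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))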

