Simon Fraser University
Midterm Exam
CMPT-310
Summer 2021
Due: Aug 1, 2021 11:59pm
This exam includes 13 questions. The total number of points is 110.
Question:   1   2   3   4   5   6   7   8   9  10  11  12  13  Total
Points:    10  10  10  10  10  10  20   5   5   5   5   5   5    110
Score:
CMPT-310 Midterm Exam: Page 2 of 7 July 25, 2021
This Midterm Exam is to be completed as individual work. No collaboration on, or
discussion of, the Midterm Exam with others is allowed.
Part I
Algorithms
The following questions require written answers (full sentences) clearly explaining your thinking
process. You are free to supplement your answers with graphs, images, and mathematics. Code
is not required as part of your submission (unless specified).
CSP
Question 1 (10 points)
Write a CSP that describes how you would currently get to SFU’s Burnaby campus
for a weekday class that begins at 9:00 am. Assume your total travel time must be less
than 90 minutes and your starting address is:
100 3rd St W
North Vancouver, BC
V7M 2G3
Include the following information:
• what are the variables?
• what are the domains?
• what are the constraints?
• what is the number of possible worlds?
Decision Tree
Question 2 (10 points)
Create a decision tree to predict the number of wildfires that could start each day given
the following variables:
• the day of the year (a value between 1 and 365, e.g., Jan 1 would be 1, Dec 31 would
be 365)
• temperature (Celsius)
• ignition, spread, & burning intensity of forest fuels (a value from 0.0 to 100.0)
State the values used for the following:
Day of Year   Temp. (Celsius)   Forest Fuels (Ignition, Spread, Intensity)   Num. Fires Started
     1              4.1                          5.1                                  0
     2              3.1                          1.1                                  0
     3              5.0                          0.2                                  0
     4             11                            6.9                                  0
     5             15.1                          8.1                                  0
     6             24.1                          5.1                                  0
     7              5.0                          7.2                                  0
     8             21                           66.9                                  0
     9             25.1                         61.1                                  1
    10             34.1                         75.1                                  0
    11              5.0                         77.2                                  2
    12             24                           66.9                                  3
    13             21.1                         81.1                                  8
    14             37.1                         85.1                                 11
Figure 1: Number of new wildfires started each day.
• stopping criterion
• splitting function
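One candidate splitting function for a regression target like this is variance reduction. The sketch below (an illustration, not the required answer) searches the Figure 1 data for the single split that minimises the weighted variance of the two children:

```python
# Sketch of a variance-reduction splitting function on the Figure 1 data.
# Each row: (day of year, temperature, forest fuels, fires started).
data = [
    (1, 4.1, 5.1, 0), (2, 3.1, 1.1, 0), (3, 5.0, 0.2, 0), (4, 11, 6.9, 0),
    (5, 15.1, 8.1, 0), (6, 24.1, 5.1, 0), (7, 5.0, 7.2, 0), (8, 21, 66.9, 0),
    (9, 25.1, 61.1, 1), (10, 34.1, 75.1, 0), (11, 5.0, 77.2, 2),
    (12, 24, 66.9, 3), (13, 21.1, 81.1, 8), (14, 37.1, 85.1, 11),
]

def variance(ys):
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(rows):
    """Return (feature index, threshold) minimising weighted child variance."""
    best = None
    for f in range(3):
        for threshold in sorted({r[f] for r in rows}):
            left = [r[3] for r in rows if r[f] <= threshold]
            right = [r[3] for r in rows if r[f] > threshold]
            score = (len(left) * variance(left)
                     + len(right) * variance(right)) / len(rows)
            if best is None or score < best[0]:
                best = (score, f, threshold)
    return best[1], best[2]

feature, threshold = best_split(data)
print("split on feature", feature, "at threshold", threshold)
```

A natural stopping criterion to pair with this would be, for example, a minimum number of rows per leaf or a zero-variance leaf; the choice is part of the answer the question asks for.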
Classification
Question 3 (10 points)
Assume the following likelihoods from Figure 2 for each word being part of a positive or
negative movie review, and equal prior probabilities for each class.
Word Positive Negative
I 0.09 0.16
always 0.07 0.06
like 0.29 0.06
foreign 0.04 0.15
films 0.08 0.11
Figure 2: Likelihoods of various words.
What class will Naive Bayes assign to the following sentence?
I always like foreign films.
Show all calculations used.
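As a computational check of the kind of arithmetic this question asks for: with equal priors, the Naive Bayes decision reduces to comparing the products of the per-word likelihoods from Figure 2. A minimal sketch:

```python
# Skeleton of the Naive Bayes comparison for the Figure 2 likelihoods.
# With equal priors, the class score is the product of word likelihoods.
likelihoods = {
    "I":       {"pos": 0.09, "neg": 0.16},
    "always":  {"pos": 0.07, "neg": 0.06},
    "like":    {"pos": 0.29, "neg": 0.06},
    "foreign": {"pos": 0.04, "neg": 0.15},
    "films":   {"pos": 0.08, "neg": 0.11},
}
sentence = ["I", "always", "like", "foreign", "films"]

score = {cls: 1.0 for cls in ("pos", "neg")}
for word in sentence:
    for cls in score:
        score[cls] *= likelihoods[word][cls]

print(score)  # the class with the larger score wins
```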
Naive Bayes Classifier
Question 4 (10 points)
The following movie reviews are each labeled with a genre (comedy or horror):
Movie Review   Words In Review               Genre
Doc1           fun, couple, love, love       comedy
Doc2           fast, furious, shoot          horror
Doc3           couple, fly, fast, fun, fun   comedy
Doc4           furious, shoot, shoot, fun    horror
Doc5           fly, fast, shoot, love        horror
Docunknown     fast, couple, shoot, fly      ???
Figure 3: Movie reviews, important words in review, genre of movie.
Compute the most likely class for Docunknown. Assume a naive Bayes classifier and use
add-1 smoothing for the likelihoods. Show all calculations used.
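The shape of the add-1 computation can be sketched directly from the Figure 3 data. Each smoothed likelihood is (count + 1) / (total words in class + vocabulary size), and the class score multiplies the prior by the likelihood of each word in the unknown document:

```python
from collections import Counter

# Sketch of add-1 (Laplace) smoothed Naive Bayes on the Figure 3 data.
docs = {
    "comedy": [["fun", "couple", "love", "love"],
               ["couple", "fly", "fast", "fun", "fun"]],
    "horror": [["fast", "furious", "shoot"],
               ["furious", "shoot", "shoot", "fun"],
               ["fly", "fast", "shoot", "love"]],
}
vocab = {w for ds in docs.values() for d in ds for w in d}
total_docs = sum(len(ds) for ds in docs.values())

def score(words, genre):
    counts = Counter(w for d in docs[genre] for w in d)
    n = sum(counts.values())
    s = len(docs[genre]) / total_docs  # prior P(genre)
    for w in words:
        s *= (counts[w] + 1) / (n + len(vocab))  # add-1 smoothing
    return s

unknown = ["fast", "couple", "shoot", "fly"]
scores = {g: score(unknown, g) for g in docs}
print(scores)
```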
Comparing Two Linear Regression Models
Question 5 (10 points)
Create a linear regression model with the following data.
Age Height (inches) Annual Income
20 64 50,000
18 60 38,000
25 58 25,000
28 63 10,000
33 66 55,000
40 54 50,000
45 61 250,000
Figure 4: Annual income in CAD$. Height is in inches and Age is in years.
Using the exact same data, create another linear regression model, but represent a
person’s height in feet instead of inches. For example, 6 feet 3 inches is
6 + 3/12 = 6.25 feet.
Compare the performance of the two linear regression models.
Include the weights learned by each model.
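One way to see what changes (and what does not) between the two models is an ordinary least-squares fit. The sketch below uses numpy on the Figure 4 data; the key point it demonstrates is that rescaling a feature rescales its weight (by 12 here) while the fitted predictions stay identical:

```python
import numpy as np

# OLS sketch for the Figure 4 data: same model, height in inches vs. feet.
age = np.array([20, 18, 25, 28, 33, 40, 45], dtype=float)
height_in = np.array([64, 60, 58, 63, 66, 54, 61], dtype=float)
income = np.array([50_000, 38_000, 25_000, 10_000,
                   55_000, 50_000, 250_000], dtype=float)

def fit(height):
    X = np.column_stack([np.ones_like(age), age, height])  # bias, age, height
    w, *_ = np.linalg.lstsq(X, income, rcond=None)
    return w, X @ w

w_inches, pred_inches = fit(height_in)
w_feet, pred_feet = fit(height_in / 12.0)  # same data, height in feet

print("inches weights:", w_inches)
print("feet weights:  ", w_feet)
print("max prediction difference:", np.abs(pred_inches - pred_feet).max())
```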
Neural Network: Input Units
Question 6 (10 points)
On the MNIST digit dataset, a neural network with 40 input units was used. After
cross-validation, the number of input units actually used was reduced to 30. Discuss the
possible reasons why the number of input units employed by the neural network was
reduced.
Trigram Language Model
Question 7 (20 points)
Create a trigram language model trained on “Monty Python and the Holy Grail”, which
can be downloaded via http://www.nltk.org/data.html. Use backoff for smoothing the
data.
Compute the probability of the language model generating the following two sentences:
• Bring out your dead!
• Bring out your living!
Show all values, formulae, and calculations used to produce the answers.
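The mechanics of a backoff trigram model can be sketched before touching the real corpus. The snippet below uses stupid backoff (with the conventional 0.4 discount) on a tiny stand-in corpus; for the actual question you would tokenise the Holy Grail script, which NLTK distributes in its webtext corpus as `grail.txt` after downloading the data:

```python
from collections import Counter

# Sketch of a trigram model with stupid backoff on a tiny stand-in corpus.
# For the real question, replace `corpus` with the tokenised script, e.g.
# nltk.corpus.webtext.words('grail.txt') after downloading the NLTK data.
corpus = "bring out your dead bring out your dead not dead i am living".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
total = sum(unigrams.values())
ALPHA = 0.4  # conventional stupid-backoff discount

def p(w3, w1, w2):
    """P(w3 | w1 w2) with backoff: trigram -> bigram -> unigram."""
    if trigrams[(w1, w2, w3)] > 0:
        return trigrams[(w1, w2, w3)] / bigrams[(w1, w2)]
    if bigrams[(w2, w3)] > 0:
        return ALPHA * bigrams[(w2, w3)] / unigrams[w2]
    return ALPHA * ALPHA * unigrams[w3] / total

def sentence_prob(words):
    # Sketch only: the first two words are left unconditioned (no padding).
    prob = 1.0
    for i in range(2, len(words)):
        prob *= p(words[i], words[i - 2], words[i - 1])
    return prob

print(sentence_prob("bring out your dead".split()))
print(sentence_prob("bring out your living".split()))
```

On this toy corpus the trigram "out your dead" is seen, while "out your living" backs off all the way to the unigram count of "living", so the first sentence comes out far more probable, which is the effect the two test sentences are designed to expose.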
Part II
Short Answer Questions
The following questions require written answers using full sentences (no more than 1-2
paragraphs) clearly explaining your thinking process. You may supplement your answers
with graphs, images, and math.
Partially Complete
Question 8 (5 points)
Discuss the difference (if any) between partial observability and partial sensory information.
Provide examples to illustrate your explanation.
Higher-Order Models
Question 9 (5 points)
Discuss why a higher-order function results in a more complex representation compared
to a lower-order function. Use real-world examples to illustrate your explanation.
Types Of Errors
Question 10 (5 points)
Explain the types of errors using the example of investing in financial stocks: interpret
what a type-I error and a type-II error each correspond to with respect to investing in
financial stocks.
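The mapping can be made concrete in a few lines. The convention below is an assumption: "positive" is taken to mean "the stock is a good investment", which fixes which error is which.

```python
# Illustrative mapping of error types to a stock-investment decision.
# Assumed convention: "positive" means "the stock is a good investment".
def error_type(predicted_good, actually_good):
    if predicted_good and not actually_good:
        return "type I"   # false positive: invest in a stock that loses money
    if not predicted_good and actually_good:
        return "type II"  # false negative: pass on a stock that would have gained
    return "correct"

print(error_type(True, False))
print(error_type(False, True))
```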
One-hot Encoding
Question 11 (5 points)
In one-hot encoding, when we convert a binary text feature (e.g., “True” or “False”) into
integer features, why can we drop one of the labels (e.g., “False” is dropped while “True”
is preserved as a feature)?
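The underlying reason is easy to demonstrate: for a binary feature the two one-hot columns always sum to 1, so they are perfectly collinear and one of them carries no extra information. A minimal sketch:

```python
# Why one column suffices for a binary feature: the two one-hot columns
# always sum to 1, so either one is fully recoverable from the other.
values = [True, False, True, True, False]

full = [(1, 0) if v else (0, 1) for v in values]   # two-column one-hot
dropped = [1 if v else 0 for v in values]          # "False" column dropped

# Reconstruct the dropped column as 1 - kept column: no information lost.
recovered = [(k, 1 - k) for k in dropped]
print(recovered == full)
```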
Data: Accuracy
Question 12 (5 points)
Discuss the issues with using accuracy as a performance metric with an imbalanced dataset.
For example, in identifying whether an artwork is a masterpiece, a typical dataset could
contain 10 artworks labeled as “masterpiece” and 9,990 artworks labeled as “not a
masterpiece”.
Data: Imbalance
Question 13 (5 points)
Discuss data imbalance in a binary classification task (i.e., output is either True or False).
Use the example of two agents (Agent1 & Agent2) programmed with the following be-
haviour:
• Agent1 always predicts True
• Agent2 always predicts False
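The behaviour of the two constant agents can be checked numerically. The sketch below reuses the imbalanced masterpiece dataset from Question 12 (10 True vs. 9,990 False) to show how accuracy rewards Agent2 despite it never identifying a single masterpiece:

```python
# Accuracy of the two constant agents on the imbalanced dataset from
# Question 12: 10 masterpieces (True) vs. 9,990 non-masterpieces (False).
labels = [True] * 10 + [False] * 9_990

def accuracy(prediction):
    """Accuracy of an agent that outputs `prediction` for every example."""
    return sum(prediction == y for y in labels) / len(labels)

agent1_acc = accuracy(True)   # Agent1 always predicts True
agent2_acc = accuracy(False)  # Agent2 always predicts False

print(agent1_acc)  # 10 / 10,000
print(agent2_acc)  # 9,990 / 10,000: looks excellent, yet finds no masterpiece
```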