Programming assignment sample: ECE4179/5179
Date: 2021-11-04
ECE4179/5179 Neural Networks and Deep Learning Practice Exam
Instructions.
• Please write up your solutions neatly! In particular, use notation and terminology correctly and explain
what you are trying to do.
• Please provide details of your computations. No marks will be given if you just provide the answer
without showing how you have obtained it.
• Partial marks will be given for showing that you know some aspects of the answer, even if your solution
is incomplete.
• The questions are NOT arranged in order of difficulty, so you should attempt every question.
• You are not allowed to use any programming language/package (e.g., Python, PyTorch, TensorFlow,
Keras, NumPy, MATLAB) to compute the answers. No marks will be given if the answers are obtained
using software.
Good Luck
What is this page for?
In the final exam, you will see an instruction sheet similar to the above. Make sure you comply with
the instructions.
Please go on to the next page. . . Page 1 of 14
1. Explain the functionality of the “transposed convolution” layer and provide an example.
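As a study aid only (remember that software may not be used in the exam itself), the "scatter" view of a transposed convolution can be sketched in NumPy; the function name and the stride-2, 2 × 2 kernel setting below are illustrative choices, not part of the question:

```python
import numpy as np

def transposed_conv2d(x, k, stride=2):
    """Transposed convolution: each input element scatters a scaled
    copy of the kernel into the (larger) output feature map."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((stride * (H - 1) + kh, stride * (W - 1) + kw))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * k
    return out

x = np.array([[1., 2.],
              [3., 4.]])
k = np.ones((2, 2))
y = transposed_conv2d(x, k, stride=2)  # 2x2 input -> 4x4 output
```

With stride equal to the kernel size there is no overlap, so each input value simply fills one 2 × 2 tile of the output; smaller strides make the scattered kernels overlap and sum.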
2. Consider a Generative Adversarial Network (GAN). Answer the following questions:
(a) Using a diagram, explain the components of a GAN and their functionalities.
(b) Explain how training is done for a GAN.
3. Consider the Binary Cross Entropy (BCE) loss defined below. Here, y and ŷ are the ground-truth label
and the predicted label, respectively. Answer the following questions.
BCE(y, ŷ) = −y log(ŷ) − (1 − y) log(1 − ŷ)
(a) For what type of problems do you use the BCE loss?
(b) What exactly is this loss trying to achieve? Explain its functionality.
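As a study aid (software is not allowed in the exam itself), the behaviour of the BCE loss can be probed numerically: it heavily penalizes confident wrong predictions and barely penalizes confident correct ones.

```python
import math

def bce(y, y_hat):
    """Binary cross-entropy for a single example.
    y is the ground-truth label in {0, 1}; y_hat in (0, 1) is the
    predicted probability of the positive class."""
    return -y * math.log(y_hat) - (1 - y) * math.log(1 - y_hat)

# A confident correct prediction gives a small loss,
# a confident wrong prediction a large one.
low = bce(1, 0.99)
high = bce(1, 0.01)
```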
4. Give an example of a data augmentation technique that would be useful for classifying images of birds,
but not for classifying handwritten digits. Assume your bird images are color images while your digits
are black and white (similar to the MNIST dataset). Explain your answer.
5. Recall that a Perceptron realizes the mapping f : Rn → {−1, 1} through f(x) = sign(w⊤x + b). Here
x ∈ Rn is the input to the Perceptron, and w ∈ Rn and b ∈ R are the weights and bias of the Perceptron,
respectively. We want to design a network, purely based on Perceptrons, that realizes any Boolean
function of three variables. In other words, the input to your design is a 3-dimensional vector of Boolean
values (0 and 1). The output of your design must match the arbitrary Boolean function (we interpret −1
as 0 to match the Boolean values). Explain whether this is at all possible and, if so, how. If not, provide
a counterexample.
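One constructive way to see the idea behind this question (as a study aid only): write the function in disjunctive normal form, with one hidden Perceptron per minterm and an OR Perceptron on top. The helper names below (`perceptron`, `boolean_net`) are illustrative, not from the unit:

```python
from itertools import product

def perceptron(w, b, x):
    # sign activation with -1 read as 0: fire iff w.x + b > 0
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def boolean_net(truth_table, x):
    """Two-layer Perceptron network realizing an arbitrary Boolean
    function of 3 variables given as truth_table[(x1, x2, x3)] in {0, 1}."""
    minterms = [p for p, v in truth_table.items() if v == 1]
    # Layer 1: an AND detector per minterm, firing only on an exact match.
    hidden = [perceptron([2 * pi - 1 for pi in p], 0.5 - sum(p), x)
              for p in minterms]
    # Layer 2: an OR Perceptron over the minterm detectors.
    return perceptron([1] * len(hidden), -0.5, hidden) if hidden else 0

# Sanity check on 3-input parity, which is not linearly separable.
parity = {p: p[0] ^ p[1] ^ p[2] for p in product((0, 1), repeat=3)}
ok = all(boolean_net(parity, p) == v for p, v in parity.items())
```

Each AND detector uses weights +1 where its minterm has a 1 and −1 where it has a 0, with bias chosen so it fires only on an exact match.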
6. (a) What is the Receptive Field (RF) of a unit in a convolutional neural network?
(b) What is the RF of an average pooling layer of size 5 × 5 with stride 1?
7. Consider the following two models.
model A. Rn ∋ x → Linear(n,100) → Linear(100,8) → Linear(8,16) → softmax(16)
model B. Rn ∋ x → Linear(n,100) → Linear(100,16) → softmax(16)
(a) Is there any flaw in the design of these two models?
(b) What is the advantage of model A over model B?
(c) What is the advantage of model B over model A?
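A useful lens for (b)/(c) is the parameter count of each model; the input dimension n = 50 below is an arbitrary illustrative choice:

```python
def linear_params(n_in, n_out):
    # a Linear(n_in, n_out) layer has a weight matrix plus a bias vector
    return n_in * n_out + n_out

n = 50  # example input dimension (arbitrary)
params_A = (linear_params(n, 100) + linear_params(100, 8)
            + linear_params(8, 16))
params_B = linear_params(n, 100) + linear_params(100, 16)
```

The 8-unit bottleneck makes model A cheaper in parameters despite having one more layer, while model B's wider path passes more information straight through.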
8. For classification, a neural network is usually equipped with a softmax layer.
(a) What is a softmax layer? Be specific about its functionality, properties and even its name.
(b) Where do you place a softmax layer?
(c) How do you train a network with a softmax layer? Be specific about the loss used in conjunction
with the softmax layer.
(d) You have seen in a blog post that the softmax is invariant under translation by the same value in
each element of its input. Do you agree with this statement? Explain your answer.
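A quick numeric check of the claim in (d), for studying only: shifting every logit by the same constant leaves the softmax output unchanged, which is also why implementations subtract max(z) for numerical stability.

```python
import math

def softmax(z):
    # subtracting max(z) is the standard numerical-stability trick;
    # it is harmless precisely because softmax is shift-invariant
    m = max(z)
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

p = softmax([1.0, 2.0, 3.0])
q = softmax([101.0, 102.0, 103.0])  # same logits shifted by 100
```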
9. Let x = (0, 1,−3, 0, 0, 2,−1, 1)> be an 8-dimensional vector.
(a) What is ‖x‖2? Here, ‖ · ‖p denotes the ℓp norm of a vector.
(b) What is ‖x‖0?
(c) What is ‖x‖1?
(d) What is ‖x‖∞?
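For checking your hand computation afterwards (not during the exam), the four norms reduce to one-liners:

```python
x = [0, 1, -3, 0, 0, 2, -1, 1]

l2 = sum(v * v for v in x) ** 0.5   # Euclidean norm          -> 4.0
l0 = sum(1 for v in x if v != 0)    # number of non-zeros     -> 5
l1 = sum(abs(v) for v in x)         # sum of magnitudes       -> 8
linf = max(abs(v) for v in x)       # largest magnitude       -> 3
```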
10. A model is reversible if you can obtain its input directly from its output. Assume x1, x2, y1, y2 ∈ Rn.
Let ⊙ denote the element-wise multiplication between two vectors (Hadamard product), and let f(·),
g(·), and exp(·) act on the elements of a vector separately. Consider the following architecture:
y1 = x1
y2 = x2 ⊙ exp(f(x1)) + g(x1)
Show that this design is reversible. That is, you can obtain x1 and x2 from y1 and y2 directly.
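To check the algebra when studying: since exp(·) never vanishes, x2 = (y2 − g(y1)) ⊙ exp(−f(y1)). The elementwise choices of f and g below are arbitrary illustrations:

```python
import math

def f(v):  # illustrative elementwise functions; any choice would do
    return [math.tanh(vi) for vi in v]

def g(v):
    return [vi * vi for vi in v]

def forward(x1, x2):
    y1 = x1
    y2 = [x2i * math.exp(fi) + gi
          for x2i, fi, gi in zip(x2, f(x1), g(x1))]
    return y1, y2

def inverse(y1, y2):
    # x1 = y1, and since exp(.) is never zero we can always divide:
    x1 = y1
    x2 = [(y2i - gi) * math.exp(-fi)
          for y2i, fi, gi in zip(y2, f(x1), g(x1))]
    return x1, x2

x1, x2 = [0.5, -1.0], [2.0, 3.0]
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)  # round trip recovers the inputs
```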
11. Consider the following neural network. The circled nodes denote variables. That is, x1 and x2 are the
input variables, and z is the output variable. For your convenience, the hidden variable is denoted by
h0. Rectangular nodes denote functions. More specifically, ⊕ takes the sum of its inputs, and σ is the
Sigmoid function. Suppose
w1 = 2, w2 = −1, and x1 = 3, x2 = 4.
Use the backpropagation algorithm to compute the partial derivative ∂z/∂w1.
Figure 1: The neural network in Q11
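Figure 1 is not reproduced in this text version, so the wiring below is an assumption: h0 = w1·x1 + w2·x2 (the ⊕ node) and z = σ(h0). Under that assumed wiring only, the backward pass can be checked numerically as follows:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

w1, w2 = 2.0, -1.0
x1, x2 = 3.0, 4.0

# forward pass (assumed wiring: h0 = w1*x1 + w2*x2, z = sigmoid(h0))
h0 = w1 * x1 + w2 * x2          # = 2.0
z = sigmoid(h0)

# backward pass: chain rule through sigma, then through the sum node
dz_dh0 = z * (1.0 - z)          # derivative of the sigmoid at h0
dz_dw1 = dz_dh0 * x1            # dh0/dw1 = x1
```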
12. Consider the input feature map (1 × 3 × 3) and the convolutional kernel (1 × 2 × 2) given in Fig. 2(a)
and Fig. 2(b), respectively, and answer the following questions.
(a) With padding = 0 and stride = 1, what is the size of the output feature map if we apply the kernel
in Fig. 2(b) to the feature map in Fig. 2(a)?
(b) With padding = 0 and stride = 2, what is the size of the output feature map if we apply the kernel
in Fig. 2(b) to the feature map in Fig. 2(a)?
(c) With padding = 1 and stride = 2, what is the size of the output feature map if we apply the kernel
in Fig. 2(b) to the feature map in Fig. 2(a)?
(d) With padding = 0 and stride = 1, obtain the output feature map if we apply the kernel in Fig. 2(b)
to the feature map in Fig. 2(a).
(a) the input feature map:
-2  3  1
 1  3  4
 7  5  4
(b) the kernel:
-4 -2
 2  1
Figure 2: The input feature map (a) and the convolutional kernel (b) in Q12
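The sizes and values asked for in Q12 can be verified afterwards (outside exam conditions) with a direct implementation. Note that convolutional layers compute cross-correlation (no kernel flip), which is what is assumed below:

```python
import numpy as np

def conv2d(x, k, stride=1, pad=0):
    """Plain cross-correlation, as computed by convolutional layers."""
    x = np.pad(x, pad)
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1   # standard output-size formula
    ow = (x.shape[1] - kw) // stride + 1
    return np.array([[np.sum(x[i * stride:i * stride + kh,
                               j * stride:j * stride + kw] * k)
                      for j in range(ow)] for i in range(oh)])

# values from Fig. 2
x = np.array([[-2, 3, 1],
              [ 1, 3, 4],
              [ 7, 5, 4]])
k = np.array([[-4, -2],
              [ 2,  1]])

out = conv2d(x, k)  # padding 0, stride 1
```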
13. Consider the convolutional kernel (1× 3× 3) given in Fig. 3. The kernel in Fig. 3 has nine parameters
to perform convolution. Your friend claims that they can obtain the same functionality with only six
parameters. Do you agree with them? If yes, why? Be explicit about your answer.
the kernel:
 6  0  2
-3  0 -1
 3  0  1
Figure 3: The convolutional kernel in Q13
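One way to see your friend's claim: the kernel in Fig. 3 has rank 1, so it factors into the outer product of a 3-element column filter and a 3-element row filter, i.e. 3 + 3 = 6 parameters realize the same convolution (applied as two successive 1-D passes). A quick check of the factorization:

```python
import numpy as np

# the kernel from Fig. 3
K = np.array([[ 6, 0,  2],
              [-3, 0, -1],
              [ 3, 0,  1]])

# K has rank 1, so it factors as an outer product u v^T:
u = np.array([2, -1, 1])   # 3 parameters (column filter)
v = np.array([3,  0, 1])   # 3 parameters (row filter)
```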
End of exam