SCC462 – Distributed Artificial Intelligence
Leandro Soriano Marcolino
Midterm Assignment
Deadline: 19/02/2021, 4pm. 25 points in total.
Instructions
• Please answer all the questions below.
• If you make any additional assumptions, please state that clearly.
• You are allowed to use external resources (papers, books, the Internet, etc.), but
you have to provide a reference if you use anything beyond the papers indicated in
class.
• You are allowed to discuss the questions with your class-mates. Any discussion
must be reported in a brief acknowledgement after the corresponding
question that was discussed.
• You are not allowed to copy the answers of a friend/class-mate. This is
an individual exam, so you must work on your own replies.
• An unreported discussion or a reply that was directly copied will lead
to a 0 in the corresponding question.
• In all questions of this assignment, X corresponds to the last digit of your student
ID, or 1 if your last digit is 0. That is, if your student ID is 12345678, you have
to assume X = 8. If your student ID is 22345670, you have to assume X = 1.
We will use the symbol % to indicate mod. That is, for example: 3%2 = 1, and
4%2 = 0.
When we write 0.X we are concatenating “0.” with the digit X . For example, if
X = 3, then 0.X = 0.3.
• Please always show your calculations. Note that the final result depends on
X . You may lose marks in a question if you do not show your calculations, even
if the final result is correct.
0. What is your Student ID? What is the value of X ?
1. Write a rejection review of Marcolino, Passos et al (2016). That is, you must
summarise the paper, explain why it is not yet suitable for publication, and give
suggestions to the authors for further improvement. (10%)
2. You were hired by “Stuck with Stocks” Investment Company to help them with
team formation problems. They have a set of agents that they use to receive advice
on the best action to take given different economic scenarios. These agents, however,
are not perfect, so there is a certain probability p_{i,j} of agent i recommending
the action with the j-th rank in a certain scenario. In order to increase the quality
of the predictions, “Stuck with Stocks” is currently using teams of 3 agents and
considering the recommendation of each agent as a vote. The recommendation of
the team is the one given by the plurality voting rule. A team can be composed
of different agents, or of copies of a single agent. Consider that ties are broken
randomly.
“Stuck with Stocks” wants to get ready for a forthcoming economic scenario. Based
on simulations, they estimate the probabilities of the agents as follows:
           Action 1   Action 2   Action 3
Agent 1    0.51       0.39       0.1
Agent 2    0.Y        0.2        1 - 0.2 - 0.Y
Agent 3    0.3        0.Y        1 - 0.3 - 0.Y
where Y = X%6. Action 1 is the best action in this economic scenario, and
Agent 1 is considered to be the “best agent”.
(a) Are the “necessary conditions” stated in Marcolino et al. (2013) for a diverse
team (composed of Agents 1, 2, and 3) to outperform a uniform team satisfied
in this economic scenario? Why? (5%)
(b) What is the probability that a team composed of 3 copies of Agent 1 recom-
mends the best action (Action 1) for this economic scenario? And a diverse
team of 3 agents, composed of one copy of each agent? Which of these
two teams would you recommend to “Stuck with Stocks”? (You can use a
calculator, but please show your calculations, e.g. 0.5∗0.3+0.7∗0.4...) (10%)
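For intuition, the plurality computation can be sanity-checked by enumerating all joint votes. The sketch below is illustrative only: the probabilities used are not those of the table above and do not depend on your value of X.

```python
from itertools import product

def plurality_win_prob(probs, target=0):
    """Probability that a team's plurality vote selects `target`,
    with ties broken uniformly at random.
    probs[i][j] = probability that agent i votes for action j."""
    n_actions = len(probs[0])
    total = 0.0
    for votes in product(range(n_actions), repeat=len(probs)):
        p = 1.0
        for agent, v in enumerate(votes):
            p *= probs[agent][v]
        counts = [votes.count(a) for a in range(n_actions)]
        winners = [a for a in range(n_actions) if counts[a] == max(counts)]
        if target in winners:
            total += p / len(winners)   # random tie-breaking
    return total

# Illustrative uniform team (three copies of a single hypothetical agent):
uniform = [[0.51, 0.39, 0.10]] * 3
print(round(plurality_win_prob(uniform), 4))   # 0.5548
```

Note how a three-way tie (each agent voting for a different action) still contributes one third of its probability mass to the target action.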
3. Consider the Boosting system with two levels of recursion shown below. For this
exercise, you must consider the version of Boosting described in Schapire (1990),
not the one described in Polikar (2012).
          M1                      M2                      M3
   M1.1  M1.2  M1.3        M2.1  M2.2  M2.3        M3.1  M3.2  M3.3
M1, M2, and M3 are the meta-classifiers; and the others are the base classi-
fiers (“Weak Learner”). Let’s consider that the original training data is: φ1 =
{(A, 1), (B, 1), (C, 0), (D, 1), (E, 1), (F, 1), (G, 0), (H, 1), (I, 0), (J, 0), (K, 0), (L, 1),
(M, 0), (N, 0), (O, 0), (P, 1), (Q, 1), (R, 0)}. The first element of a tuple is the item,
and the second element of a tuple is the item’s true label.
After training the first meta-classifier M1, another training dataset is used for
training M2. For example: φ2 = {(A, 1), (C, 0), (C, 0), (E, 1), (K, 0), (O, 0), (P, 1),
(P, 1), (Q, 1), (R, 0)}.
• How was the dataset φ2 formed? In your explanation, show a concrete
example of M1’s output across the training data that could lead to the
training set φ2. (5%)
• Let φ2.3 be the dataset used for training M2.3. How is that dataset formed?
In your explanation, show which items form the oracle EX that would be
used when defining the oracle EX3 for M2.3. (5%)
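For reference, the filtered oracles of Schapire (1990) can be sketched as rejection samplers over the original example oracle. Everything below the function definitions (`draw`, `h1`, `h2`, the toy data) is a hypothetical stand-in, not the exam's dataset or hypotheses.

```python
import random

def ex2(draw, h1):
    """Schapire's (1990) second oracle: with probability 1/2 return an
    example that h1 classifies correctly, and with probability 1/2 one
    that h1 misclassifies."""
    want_correct = random.random() < 0.5
    while True:
        x, y = draw()
        if (h1(x) == y) == want_correct:
            return x, y

def ex3(draw, h1, h2):
    """Schapire's (1990) third oracle: return an example on which the
    first two hypotheses disagree."""
    while True:
        x, y = draw()
        if h1(x) != h2(x):
            return x, y

# Toy illustration (hypothetical data and hypotheses):
random.seed(0)
data = [(i, 1 if i % 2 == 0 else 0) for i in range(10)]
draw = lambda: random.choice(data)
h1 = lambda x: 1 if x < 5 else 0                  # weak, sometimes wrong
h2 = lambda x: 1 if x in (0, 2, 4, 5, 7) else 0   # disagrees with h1 on some items
x2, y2 = ex2(draw, h1)
x3, y3 = ex3(draw, h1, h2)
```

The same filtering idea applies recursively: each meta-classifier's sub-classifiers define their oracles relative to that meta-classifier's own EX.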
4. Based on Schapire (1990), why is a concept strongly learnable if and only if it is
weakly learnable? Please explain in your own words. (5%)
5. Which variation is proposed by Schapire (1990) in order to make the Boosting
algorithm run faster than its original version? Why is it faster? (5%)
6. In this exercise we will simulate by hand the result of training and executing an
AdaBoost system. When solving this exercise, you can simulate “in your head”
the result of randomness. I.e., if you have to pick 3 items with uniform probability,
you can just select any 3 items. If some item has a higher probability, you should
have a higher tendency of selecting it. Likewise, if you train a weak classifier on a
set of items, you can state which subset the weak classifier will correctly identify,
and on which subset it will make mistakes. However, you must make reasonable
assumptions, i.e., the weak classifier is not able to learn the full set of items; it is
only slightly better than random guessing. (To be clear: you
are not being asked to implement any algorithm in the computer, but
just to “simulate” by hand the algorithms!)
Let’s assume the following training set, where the first element of a tuple is the
item, and the second element of a tuple is the item’s label.
(A, 1), (B,X%2), (C, 0), (D, 1), (E, 0), (F, (X + 1)%2)
Simulate AdaBoost with 3 classifiers, including training the system and the final
classification of 1 item. You must follow the equations in Freund and Schapire
(1996). Please clearly show all the steps of the algorithm, and your calculations.
(10%)
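As a reminder of the update rule you will apply by hand, here is a minimal sketch in the style of AdaBoost.M1 from Freund and Schapire (1996): β_t = ε_t/(1 − ε_t), the weights of correctly classified items are multiplied by β_t and renormalised, and the final hypothesis is a vote weighted by log(1/β_t). The dataset and the two decision stumps are invented for illustration and do not use your value of X.

```python
import math

def adaboost(data, weak_learners, T):
    """AdaBoost.M1-style training loop. `weak_learners` is a fixed pool of
    candidate hypotheses (callables); each round picks the one with the
    lowest weighted error, standing in for a call to a weak learner."""
    n = len(data)
    w = [1.0 / n] * n                      # initial distribution: uniform
    chosen = []
    for _ in range(T):
        # pick the hypothesis minimising the weighted error eps_t
        best, eps = None, None
        for h in weak_learners:
            e = sum(w[i] for i, (x, y) in enumerate(data) if h(x) != y)
            if eps is None or e < eps:
                best, eps = h, e
        beta = eps / (1.0 - eps)           # assumes 0 < eps < 1/2
        # multiply the weights of *correctly* classified items by beta
        w = [w[i] * (beta if best(x) == y else 1.0)
             for i, (x, y) in enumerate(data)]
        s = sum(w)
        w = [wi / s for wi in w]           # renormalise the distribution
        chosen.append((best, math.log(1.0 / beta)))
    def final(x):
        # final hypothesis: vote weighted by log(1/beta_t), labels {0, 1}
        votes = {0: 0.0, 1: 0.0}
        for h, alpha in chosen:
            votes[h(x)] += alpha
        return max(votes, key=votes.get)
    return final

# Toy run (hypothetical items, labels, and stumps):
data = [(0, 1), (1, 1), (2, 0), (3, 1), (4, 0), (5, 0)]
stumps = [lambda x: 1 if x < 2 else 0,    # wrong only on item 3
          lambda x: 1 if x < 4 else 0]    # wrong only on item 2
H = adaboost(data, stumps, T=3)
```

In your hand simulation, the analogue of the `w` update is recomputing the item probabilities after each weak classifier is trained.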
7. In Freund and Schapire (1996), how do the on-line allocation problem and its pro-
posed solution (first part of the paper) relate to the development of the AdaBoost
algorithm (second part of the paper)? Why was that a significant advance over
previous work? (5%)
8. Let’s consider now the ensemble system algorithm called Stacked Generalisation:
• Write a pseudo-code for the Stacked Generalisation meta-classifier. (5%)
• Consider that we have 3 classifiers available, and we are going to train and
test a Stacked Generalisation meta-classifier. We will assume the following
dataset:
(A, 0), (B,X%2), (C, 0), (D, 1), (E, 1), (F, (X + 1)%2), (G, 1), (H,X%2), (I, 0)
Items A,B,C,D,E, F are going to be used for training the meta-classifier,
and items G, H, I will be used for testing the meta-classifier. Show step-by-step
the full training and testing procedure. When training the meta-
classifier, use blocks of size 1. That is, the training set will be
divided into 6 blocks of 1 item each. As in Question 6, you should
“simulate” the output of the individual classifiers. (10%)
9. Consider a set of 4 Neural Networks, using one-hot encoding in a classification
problem with 4 possible labels. Assume that when classifying an item, we obtained
the following results:
(a) Neural Network 1
    Label 0: 0.X
    Label 1: 0.3
    Label 2: 0.Y
    Label 3: 0.4

(b) Neural Network 2
    Label 0: 0.Y
    Label 1: 0.4
    Label 2: 0.5
    Label 3: 0.25

(c) Neural Network 3
    Label 0: 0.7
    Label 1: 0.05
    Label 2: 0.6
    Label 3: 0.Z

(d) Neural Network 4
    Label 0: 0.1
    Label 1: 0.Z
    Label 2: 0.3
    Label 3: 0.5
Assume Y = X%6, and Z = Y + 1.
We will interpret the output of the neurons as a ranking, and aggregate the rank-
ings using the Borda voting rule. What would be the final aggregated ranking?
Please show your calculations. (5%)
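The Borda aggregation itself can be sketched as follows: each voter's scores induce a ranking, and with m labels the label ranked k-th from the top earns m − 1 − k points. The two score tables below are illustrative only; they are not the four networks above.

```python
def borda(score_tables):
    """Aggregate several score tables into one ranking with the Borda rule.
    Each table maps label -> output score; with m labels, the label ranked
    k-th (k = 0 for the top) by a voter earns m - 1 - k points."""
    labels = list(score_tables[0])
    points = {lab: 0 for lab in labels}
    for table in score_tables:
        ranking = sorted(labels, key=lambda lab: table[lab], reverse=True)
        for k, lab in enumerate(ranking):
            points[lab] += len(labels) - 1 - k
    final = sorted(labels, key=lambda lab: points[lab], reverse=True)
    return final, points

# Illustrative tables only (two hypothetical voters, four labels):
nets = [{"L0": 0.2, "L1": 0.3, "L2": 0.5, "L3": 0.4},
        {"L0": 0.6, "L1": 0.4, "L2": 0.5, "L3": 0.2}]
ranking, pts = borda(nets)
```

In this toy example L2 collects 3 + 2 = 5 points and tops the aggregated ranking even though neither voter scored it highest by a large margin.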
10. Consider the team/ensemble success prediction system described in Marcolino,
Lakshminarayanan et al. (2016). Let’s assume that we have an ensemble of 3
classifiers (c0, c1, c2), working across batches of 4 items. The final decision of the
ensemble is given by the plurality voting rule. Let’s assume the following training
process for the team performance prediction, where we show the classifiers’ votes
across four batches, and the true label of the item they tried to classify:
(a) Batch 1
    c0   c1   c2   | True Label
    0    0    1    | X%2
    1    1    1    | 0
    0    0    0    | 1
    1    1    0    | 1

(b) Batch 2
    c0   c1   c2   | True Label
    1    0    1    | 0
    0    1    1    | 1
    0    1    Y%2  | 1
    1    1    0    | 0

(c) Batch 3
    c0   c1   c2   | True Label
    1    1    1    | 1
    1    0    0    | 0
    0    X%2  1    | 1
    1    0    0    | Y%2

(d) Batch 4
    c0   c1   c2   | True Label
    1    1    0    | X%2
    1    0    1    | 1
    0    1    X%2  | 1
    0    1    1    | 0
We assume that Y = X + 1.
Show one full training iteration of the team prediction methodology using stochas-
tic gradient descent. That is, show one update of the weights of the team predic-
tion classifier for each batch. Assume that weights start as 0.1 (βi = 0.1), and
the learning rate is 0.01 (α = 0.01). As usual in stochastic gradient descent, you
can process the batches in any order. Note: recall that the stochastic gradient
descent algorithm was described in Coursework 10 of SCC461. (10%)
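As a reminder of the mechanics only (not of the exact prediction model in Marcolino, Lakshminarayanan et al. (2016)), a single stochastic gradient descent update for a logistic-regression-style predictor looks like the sketch below. The feature vector and label are invented for illustration; in the exercise, the features and labels come from the batches above.

```python
import math

def sgd_step(beta, x, y, alpha):
    """One stochastic gradient descent update for logistic regression:
    beta_j <- beta_j + alpha * (y - sigmoid(beta . x)) * x_j."""
    z = sum(b * xi for b, xi in zip(beta, x))
    p = 1.0 / (1.0 + math.exp(-z))             # predicted probability
    return [b + alpha * (y - p) * xi for b, xi in zip(beta, x)]

# Hypothetical single example: weights start at 0.1, learning rate 0.01,
# as in the question; features and label are made up.
beta = sgd_step([0.1, 0.1, 0.1], x=[1, 0, 1], y=1, alpha=0.01)
```

Only the weights whose features are non-zero move; the others stay at 0.1.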
11. Consider the following game between two agents A and B. A is the column player
(with actions A1 and A2), and B is the row player (with actions B1 and B2). The
payoffs are shown below; the first number is for agent B and the second for agent
A.
         A1      A2
    B1   X, 6    0, 4
    B2   4, 7    5, X ∗ 2
(a) What is the pure strategy Nash equilibrium of this game? Why? (2%)
(b) Is there a mixed strategy Nash equilibrium for this game where agent A takes
action A1 with probability p, and agent B takes action B1 with probability
q? What are the values of p and q? (0 < p < 1, 0 < q < 1) (3%)
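For a generic 2×2 game, the indifference conditions can be solved mechanically: p makes the row player indifferent between its rows, and q makes the column player indifferent between its columns. The sketch below is illustrative (its sanity check uses matching pennies, not this game's payoffs) and only yields a valid equilibrium when both probabilities fall strictly between 0 and 1.

```python
from fractions import Fraction

def mixed_equilibrium(B, A):
    """Interior mixed equilibrium of a 2x2 game via indifference conditions.
    B[i][j] / A[i][j]: row / column player's payoff when the row player
    plays row i and the column player plays column j.
    Returns (p, q): p = P(column player plays column 0),
                    q = P(row player plays row 0)."""
    # Row player indifferent between its two rows:
    p = Fraction(B[1][1] - B[0][1], B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # Column player indifferent between its two columns:
    q = Fraction(A[1][1] - A[1][0], A[0][0] - A[1][0] - A[0][1] + A[1][1])
    return p, q

# Sanity check on matching pennies (both probabilities should be 1/2):
p, q = mixed_equilibrium([[1, -1], [-1, 1]], [[-1, 1], [1, -1]])
```

With row 0 read as B1 and column 0 as A1, p and q correspond to the probabilities named in the question.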
(c) Consider now the payoffs below:
         A1         A2
    B1   5, 2       1, 5
    B2   1000, 1    −20, 1.1
• If this were a Stackelberg game¹, where the row player (B) is allowed
to commit to a pure strategy (not to a mixed strategy), which strategy
should it commit to? Why? (1%)
• Can the row player perform even better by committing to a mixed strat-
egy? Why? (4%)
¹ We will discuss Stackelberg games in Week 16.