UNIVERSITY OF EDINBURGH
COLLEGE OF SCIENCE AND ENGINEERING
SCHOOL OF INFORMATICS
INFR08010 INFORMATICS 2D: REASONING AND AGENTS
Monday 1st May 2017
09:30 to 11:30
INSTRUCTIONS TO CANDIDATES
1. Answer Parts A, B and C.
2. The multiple choice questions in Part A are worth 50% in total and
are each worth the same amount. Mark one answer only for each
question - multiple answers will score 0. Marks will not be deducted
for incorrect multiple choice exam answers.
3. Parts B and C are each worth 25%. Answer ONE question from Part
B and ONE question from Part C.
4. Use the special mark sheet for Part A. Answer Parts B and C each in
a separate script book.
CALCULATORS ARE PERMITTED.
Convener: I. Simpson
External Examiner: I. Gent
THIS EXAMINATION WILL BE MARKED ANONYMOUSLY
Part A
ANSWER ALL QUESTIONS IN PART A. Use the special mark sheet.
1. Which one of the following cannot be deemed an agent?
(a) Laptop computer
(b) Air conditioner
(c) Aeroplane
(d) Roulette wheel
(e) Toy robot
2. Which of the following types of agents does not fit into the standard classification
of agents that was used in the course?
(a) Model-Based Reflex Agent
(b) Schema-Based Agent
(c) Simple Reflex Agent
(d) Utility-Based Agent
(e) Goal-Based Agent
3. Which of the following sets of first-order literals cannot be unified?
(a) {P (C, x1), P (x2, B(x2))}
(b) {P (C1, x1), P (x2, C2)}
(c) {P (C,B(x1)), P (x2, x2)}
(d) {P (C1, x), P (C1, C2)}
(e) {P (C, x1), P (x2, B(C))}
4. Which of the following is not part of the DPLL algorithm, as given in Russell &
Norvig and the lectures?
(a) A sentence is false if any of its clauses is false.
(b) A pure symbol can be set to false.
(c) A clause is true if any of its symbols is true.
(d) A pure symbol can be set to true.
(e) A symbol in a unit clause can be set to true.
5. Consider depth-first and breadth-first search for finite maximum branching factor.
Which of the following statements is true?
(a) Both are optimal.
(b) Both are complete.
(c) Both have exponential worst-case time complexity.
(d) Neither is optimal.
(e) Iterative deepening search is the only practical way to use them (in combina-
tion).
6. If the domain of a variable becomes empty, what does the AC-3 algorithm do?
(a) The CSP is indicated as satisfiable.
(b) The domain is refilled by default values.
(c) Nothing, because this cannot happen.
(d) The algorithm returns a truth value.
(e) The last arc is removed and the algorithm proceeds with other arcs.
7. Which statement on adversarial search using the α-β algorithm is correct?
(a) The α-β algorithm fails, if the opponent does not play optimally.
(b) In the α-β algorithm, pruning may cause the final result to be suboptimal.
(c) With perfect ordering, time complexity is quadratic.
(d) Backing up the minimax values through the tree can be omitted in most
practical applications.
(e) Space complexity is better than iterative deepening search.
8. Which of the following statements about Hierarchical Task Network (HTN) Plan-
ning is correct?
(a) All refinements of high-level actions (HLAs) are a sequence of only primitive
actions.
(b) A refinement of an HLA always has the same preconditions as the HLA.
(c) If no states in the pessimistic description of the reachable states of a high-level
plan (HLP) satisfy the goal, then that HLP is guaranteed to fail.
(d) If A is in the Effect of one refinement of HLA h, and ¬A is in another
refinement of h, then +˜A is in Reach+(s, h).
(e) An HLA that achieves a goal has the downward refinement property only if
all its implementations also achieve that goal.
9. Which of the following is not equivalent to the formula A⇔ B ∨ C?
(a) (A ∨ ¬B ∨ ¬C) ∧ (B ∨ ¬A) ∧ (C ∨ ¬A)
(b) (¬A ∨B ∨ C) ∧ (¬B ∨ A) ∧ (¬C ∨ A)
(c) (¬A ∨B ∨ C) ∧ ((¬B ∧ ¬C) ∨ A))
(d) (A⇒ (B ∨ C)) ∧ ((B ∨ C)⇒ A))
(e) (¬A ∨B ∨ C) ∧ (¬(B ∨ C) ∨ A)
10. Which of the following is true of the WALKSAT algorithm?
(a) It deletes clauses that contain pure literals.
(b) It uses the min-conflict heuristic to minimize the number of satisfied clauses.
(c) It is complete for finite problems.
(d) It initially assigns True to all symbols
(e) With a certain probability, it flips the sign of a symbol in order to maximize
the number of satisfied clauses.
11. Forward chaining vs. backward chaining: Which of the following is usually wrong?
(a) If the branching factor in backward direction is relatively large, use forward
chaining.
(b) If a large amount of data is given, backward chaining is not an option.
(c) If some data are missing, backward chaining is a reasonable option.
(d) If goals are to be suggested by the system, use forward chaining.
(e) If the goal is known, use backward chaining.
12. Which one of the following best describes the closed-world assumption in plan-
ning?
(a) Nothing changes without a known reason.
(b) Every plan will ultimately lead to (one of) the goal(s).
(c) Every change is due to the agent.
(d) Known facts cannot change at the next time step.
(e) There is a one-to-one correspondence between states and situations.
13. Which is the negation of the sentence “All birds are black”?
(a) There exists a bird and it is black.
(b) There exists a bird and it is white.
(c) If a bird exists, then it is not black
(d) There exists a bird and it is not black.
(e) All birds are not black.
14. You are given four action descriptions with conditional effects:
Action(A, Precond: {P, X}, Effect: {(when Z: ¬X)})
Action(B, Precond: {¬Y}, Effect: {(when P: Z, X)})
Action(C, Precond: {Z}, Effect: {(when P: ¬Y), X})
Action(D, Precond: {¬X}, Effect: {(when P: Z, X)})
What state would result from executing the action sequence [D,C,B,A] in the
state {P, Y }?
(a) The plan is not executable because the preconditions for action A are not
met.
(b) {P,Z}
(c) {P,X,Z}
(d) {P,¬X,¬Y, Z}
(e) {P,X, Y, Z}
15. Which of the following is not generally true if the relation KB ⊨ α holds?
(a) KB contains a sentence that implies α
(b) Every model of KB is also a model of α
(c) KB ⇒ α
(d) Resolution of some sentences in KB and ¬α produces the empty clause.
(e) ¬α entails ¬KB
16. In a vocabulary with only 3 propositional symbols A, B, and C, how many models
are there for the sentence A⇒ B?
(a) 0
(b) 1
(c) 2
(d) 3
(e) 6
17. The following formula describes the “forward” part of the algorithm used for
prediction in temporal probabilistic models:
f1:t+1 = α P(et+1 | Xt+1) Σxt P(Xt+1 | xt) P(xt | e1:t)
Which of the following statements is incorrect?
(a) α is the normalisation factor
(b) P (xt|e1:t) is the current state distribution
(c) P (xt+1) depends on both P (xt) and P (et)
(d) P (et+1|Xt+1) is obtained from the sensor model
(e) P (Xt+1|xt) is obtained from the transition model
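To see the formula in action, here is a sketch of one forward step on a hypothetical two-state model; all the numbers below are invented for illustration.

```python
# One forward-filtering step:
# f_{1:t+1} = alpha * P(e_{t+1}|X_{t+1}) * sum_{x_t} P(X_{t+1}|x_t) * P(x_t|e_{1:t})

def forward_step(f_prev, transition, sensor_col):
    """f_prev:     P(x_t | e_{1:t}), one entry per state
    transition: transition[i][j] = P(X_{t+1}=j | x_t=i)   (transition model)
    sensor_col: sensor_col[j]    = P(e_{t+1} | X_{t+1}=j) (sensor model)"""
    n = len(f_prev)
    # Prediction: sum over x_t of P(X_{t+1}|x_t) P(x_t|e_{1:t})
    predicted = [sum(transition[i][j] * f_prev[i] for i in range(n)) for j in range(n)]
    # Weight by the sensor model, then normalise (this is the alpha factor)
    unnorm = [sensor_col[j] * predicted[j] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Toy umbrella-style numbers (hypothetical): states are (rain, no rain)
f = [0.5, 0.5]
T = [[0.7, 0.3], [0.3, 0.7]]   # P(Rain_{t+1} | Rain_t)
sensor = [0.9, 0.2]            # P(umbrella observed | Rain_{t+1})
f_next = forward_step(f, T, sensor)
print(f_next)  # -> [0.818..., 0.181...]
```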
18. Consider the following PDDL action description in a Blocks World application:
Action(Move(b, x, y),
Precond: On(b, x) ∧ Clear(b) ∧ Clear(y)
Effect: On(b, y) ∧ Clear(x) ∧ ¬On(b, x) ∧ ¬Clear(y))
and the state S1 defined as follows:
S1 : {On(A,B), On(B,C), Clear(A), Clear(D)}
where
• Move(b, x, y) is the action of moving block b from block x to block y.
• On(b, x) means block b is on block x.
• Clear(x) means block x has nothing on top of it.
Which of the following statements is incorrect?
(a) Applying Move(A,B,D) in S1 would result in state
{On(A,D),¬On(A,B),¬Clear(D), Clear(B)}
(b) Move(A,B,D) can be applied in S1
(c) You need a sequence of at least two actions to achieve the goal On(B,D).
(d) [Move(A,B,D),Move(B,C,A)] can be applied in S1
(e) Move(B,C,D) cannot be applied in S1
19. Which of the following is assumed to be true for naive Bayes classifier?
(a) It applies to connected nodes in a Bayesian network.
(b) Effects are assumed to be independent.
(c) Different causes are assumed to be independent.
(d) It is exact for binary random variables.
(e) A common cause fully explains the dependencies among the effects.
20. Suppose that we model a rational agent as a Markov Decision Process (MDP).
Then which of the following statements is incorrect?
(a) Identifying the action that will be performed next may be non-deterministic.
(b) The agent’s expected utilities change over time only because his beliefs do.
(c) Agents that are modelled as MDPs will value long-term rewards over imme-
diate rewards.
(d) MDPs require that all possible actions and all their possible outcomes be
known right from the start of the dynamic process.
(e) The value iteration algorithm calculates the (current) expected utility of a
state for a rational agent.
Part B
ANSWER ONE QUESTION FROM PART B
1. Resolution and Modus Ponens
(a) State the binary resolution rule and explain how it can be applied to first
order clauses. [5 marks ]
(b) Given the following axioms
A ∨B ∨ C,
A ∨B ∨ ¬C,
A ∨ ¬B ∨ C,
A ∨ ¬B ∨ ¬C,
¬A ∨B,
¬A ∨ ¬B ∨ C,
show by resolution that A ∧B ∧ C. [6 marks ]
(c) Assuming that Modus Ponens is sound, prove that Generalized Modus Po-
nens (GMP) is also a sound rule of inference for first order logic. [4 marks ]
(d) Give an algorithm for forward-chaining using Generalised Modus Ponens. [4 marks ]
(e) Describe two possible ways in which the efficiency of the forward-chaining
algorithm from the previous question can be improved. In each case, you
need to explain briefly what the issue is and how the improvement addresses
it. [6 marks ]
2. Planning
In a system for robotic office delivery, two robots A and B have to transport
packages P and Q between different rooms X, Y and Z given the following
initial (left) and goal (right) states:
Each robot can carry any number of packages, the robots can move simultane-
ously, and they can both be in any room or any corridor at the same time.
We use the following predicates to describe the domain:
• In(o, r) – robot or package o is in room r
• Holds(r, p) – robot r is carrying package p
• OnFloor(p) – package p is on the floor (i.e. no robot is carrying it)
• Robot(r) denotes that r is a robot
• Package(p) denotes that p is a package
• Room(rm) denotes that rm is a room
• Corridor(rm1, rm2) – there is a corridor between rm1 and rm2
(a) Using the above PDDL vocabulary, define the initial state and goal. [4 marks ]
(b) Specify action schemata for the following actions:
i. Go(r, rm1, rm2): robot r moves from room rm1 to room rm2 connected
by a corridor, moving anything it is potentially carrying to rm2 with it [2 marks ]
ii. Drop(r, p): robot r drops package p, resulting in the package being on
the floor [2 marks ]
iii. Pickup(r, p): the robot picks up a package from the floor; as a result,
the robot is holding the package [2 marks ]
(c) Define the actions, orderings, links and open preconditions of a partial-order
plan that solves the above planning problem. What is the main advantage
of using partial-order planning in this particular domain? [6 marks ]
(d) Now suppose that there is a light switch in each room, and the robots can
pick up a package if the light in the room is on. Furthermore, to move out
of a room (and into another), the room’s light must be on. The robots can
also flip a light switch if they are in the same room as the switch.
i. Define the predicates you need to express the light in the room being
on (or off, which you can assume is the same as not on). [4 marks ]
ii. Define the action schema Flip, of a robot flipping a light switch in the
room it’s in, which results in the light being on if it was off, and off if
it was on. [2 marks ]
iii. Assume that in the initial state, the lights are off in all rooms, and the
switches in rooms X, Y and Z are respectively SX , SY and SZ . Then
define a total ordered plan that solves the above planning problem. [3 marks ]
Part C
ANSWER ONE QUESTION FROM PART C
1. Consider a variation of the Tic-Tac-Toe game which is identical in terms of legal
moves, but is extended by the following rules:
• For a pair, i.e. two symbols in a row (horizontally or vertically), the player
obtains 2 points. For a triplet, i.e. three symbols in a row (horizontally,
vertically, or diagonally), the player obtains 5 points in addition to the two
times 2 points for the two pairs, i.e. a total of 9 points.
• The game terminates when a triplet has been completed or all squares have
been filled. The winner is the player who accumulates the highest number
of points.
In the root node of the following game tree, for example, the Max and Min players
would both score 2 as each player has completed one pair (we write 2/2 to denote
this intermediate score for Max/Min). In the rightmost leaf node, Max wins with
a score of 13 points (one triplet and four pairs), and Min loses with 2 points for
a single pair.
(a) Label all nodes in the tree given above with the respective intermediate/final
scores using the Max/Min format. [8 marks ]
(b) Apply the minimax algorithm to compute the optimal strategy for player
Max. You can specify the strategy by the first choice of Left-Middle-Right
moves that is taken by Max. For full marks, your answer should explain the
reasoning applied by the algorithm in detail. [7 marks ]
(c) Consider an incomplete strategy where the search only expands the tree
from Max’s point of view to depth 1 (i.e. only the first three nodes below
the root node are explored).
i. If the heuristic was based on counting the number of pairs achieved so
far (which we call the Pairs heuristic), would this lead Max to make
the right choice in the root node? Justify your answer. [5 marks ]
ii. As an alternative to the Pairs heuristic, consider the heuristic Oppo-
nentGaps which discounts the value of the state by 1 for every empty
square Min could fill in her move that is adjacent to one or more O
symbols (e.g. the leftmost node under the root node would obtain a
value of -1). Note, that diagonal neighbours are not considered adja-
cent here. Would this work better or worse for Max than Pairs when
making a decision in the root node? Your answer should include a brief
explanation. [5 marks ]
2. Bayesian Inference
Consider a population of women, 50% of whom are over 40. If they are over 40
years old, then 10% of them have breast cancer. But only 5% of women under
40 have breast cancer. 70% of the women with breast cancer will have a positive
mammography (meaning the test indicates she has cancer). 20% of women who
do not actually have breast cancer also get a positive mammography (meaning
that they are incorrectly diagnosed with cancer).
(a) Model the above description as a Bayes Net, using Boolean variables and
conditional probability tables. [7 marks ]
(b) Using your Bayes Net, calculate the likelihood that a woman under 40 who
has had a negative mammography has breast cancer. Be sure to show each
step of your calculation.
(c) Do you need to know the prior probability that a woman is under 40 to
compute the posterior probability that a woman having breast cancer gets
a positive mammography? Justify your answer.
(d) Suppose that a woman, who is under 40, gets a positive mammography test,
M1, but goes on to have a second mammography, M2, that turns out to be
negative. Use the Naive Bayes assumption to compute the likelihood that
she has breast cancer. [7 marks ]
Solutions: Part A
1. d Roulette wheel
2. b Schema-Based Agent
3. c {P (C,B(x1)), P (x2, x2)}
4. b A pure symbol can be set to false.
5. c Both have exponential worst-case time complexity.
6. d The algorithm returns a truth value.
7. e Space complexity is better than iterative deepening search.
8. d If A is in the Effect of one refinement of HLA h, and ¬A is in another
refinement of h, then +˜A is in Reach+(s, h).
9. a (A ∨ ¬B ∨ ¬C) ∧ (B ∨ ¬A) ∧ (C ∨ ¬A)
10. e With a certain probability, it flips the sign of a symbol in order to
maximize the number of satisfied clauses.
11. b If a large amount of data is given, backward chaining is not an option.
12. a Nothing changes without a known reason.
13. d There exists a bird and it is not black.
14. b {P,Z}
15. a KB contains a sentence that implies α
16. e 6
17. c P (xt+1) depends on both P (xt) and P (et)
18. a Applying Move(A,B,D) in S1 would result in state
{On(A,D), ¬On(A,B), ¬Clear(D), Clear(B)}
19. e A common cause fully explains the dependencies among the effects.
20. c Agents that are modelled as MDPs will value long-term rewards over
immediate rewards.
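The answer to question 14 can be checked by simulating the conditional-effect semantics directly. The encoding below (sets of positive literals, with absent symbols false under the closed-world assumption) is our own sketch, not course code:

```python
# Simulate question 14's action sequence [D, C, B, A] from state {P, Y}.

def apply(state, precond, effects):
    """precond: set of literals like 'P' or '-X'.
    effects: list of (condition, adds, dels); condition may be None."""
    def holds(lit):
        return lit.lstrip('-') not in state if lit.startswith('-') else lit in state
    if not all(holds(p) for p in precond):
        raise ValueError("precondition not met")
    adds, dels = set(), set()
    for cond, a, d in effects:
        if cond is None or holds(cond):
            adds |= set(a)
            dels |= set(d)
    return (state - dels) | adds

# The four action descriptions from question 14
A = ({'P', 'X'}, [('Z', [], ['X'])])            # when Z: ¬X
B = ({'-Y'},     [('P', ['Z', 'X'], [])])       # when P: Z, X
C = ({'Z'},      [('P', [], ['Y']), (None, ['X'], [])])  # when P: ¬Y; and X
D = ({'-X'},     [('P', ['Z', 'X'], [])])       # when P: Z, X

state = {'P', 'Y'}
for pre, eff in [D, C, B, A]:
    state = apply(state, pre, eff)
print(sorted(state))  # -> ['P', 'Z'], matching answer (b)
```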
Part B
1. Resolution and Modus Ponens
(a) Marking points: stating the binary resolution rule, propositionalisation,
unification, conversion to CNF, and resolution with factoring or
standardising apart variables.
(b) Negated goal: ¬A ∨ ¬B ∨ ¬C
Resolve A ∨ B ∨ C with A ∨ B ∨ ¬C to obtain A ∨ B.
Resolve A ∨ ¬B ∨ C with A ∨ ¬B ∨ ¬C to obtain A ∨ ¬B.
Resolve ¬A ∨ ¬B ∨ C with the negated goal ¬A ∨ ¬B ∨ ¬C to obtain ¬A ∨ ¬B.
Resolve A ∨ B with A ∨ ¬B to obtain A.
Resolve ¬A ∨ B with ¬A ∨ ¬B to obtain ¬A.
Resolve A with ¬A to obtain the empty clause.
Other solutions are possible.
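The refutation can also be verified with a small propositional resolution loop. This is an illustrative sketch (clauses as frozensets of signed atoms), not the course implementation:

```python
from itertools import combinations

def resolvents(c1, c2):
    """All binary resolvents of two clauses (sets of literals like 'A', '-A')."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('-') else '-' + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

# The six axioms plus the negated goal -(A & B & C) = -A | -B | -C
clauses = {frozenset(c) for c in [
    {'A', 'B', 'C'}, {'A', 'B', '-C'}, {'A', '-B', 'C'}, {'A', '-B', '-C'},
    {'-A', 'B'}, {'-A', '-B', 'C'}, {'-A', '-B', '-C'},
]}

# Saturate under resolution until the empty clause appears (or nothing new)
derived_empty = False
while not derived_empty:
    new = set()
    for c1, c2 in combinations(clauses, 2):
        for r in resolvents(c1, c2):
            if not r:
                derived_empty = True
            new.add(r)
    if new <= clauses:
        break
    clauses |= new

print(derived_empty)  # -> True: the clause set is unsatisfiable
```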
(c) We need to show that p′1, . . . , p′n, (p1 ∧ . . . ∧ pn ⇒ q) ⊢ qθ, provided p′iθ = piθ
for all i and some most general unifier θ.
Proof : For any sentence p, we have p ⊢ pθ by the Universal Instantiation
rule. Using this (and a few basic rules), we thus have:
i. (p1 ∧ . . . ∧ pn ⇒ q) ⊢ (p1 ∧ . . . ∧ pn ⇒ q)θ = (p1θ ∧ . . . ∧ pnθ ⇒ qθ)
ii. p′1, . . . , p′n ⊢ p′1 ∧ . . . ∧ p′n ⊢ (p′1 ∧ . . . ∧ p′n)θ = p′1θ ∧ . . . ∧ p′nθ = p1θ ∧ . . . ∧ pnθ,
since by definition of GMP, we have p′iθ = piθ for all i.
iii. From steps i and ii, qθ follows by Modus Ponens.
Mark breakdown: 1 mark for stating what is being proved, 1 mark for each of
steps i to iii in the proof.
(d) (Bookwork) The standard forward-chaining algorithm (FOL-FC-ASK) from
Russell & Norvig is an acceptable answer. Note that the algorithm does not
need to be given exactly as in the book; alternative answers not in
pseudo-code but describing the main aspects are acceptable.
(e) Any two of the following three improvements are acceptable [3 marks each]:
i. Incremental forward-chaining: In this, at each iteration t, we check a
rule only if its premise includes a conjunct pi that unifies with a fact pi
newly inferred at iteration t−1. The rule-matching step then fixes pi to
match with p′i from the FC algorithm, but allows the other conjuncts of
the rule to match with facts from any previous iteration. This algorithm
generates exactly the same facts at each iteration as FC, but is much
more efficient.
ii. Avoid irrelevant facts: Forward chaining makes all allowable inferences
based on the known facts, even if they are irrelevant to the goal at hand.
One solution is to restrict forward chaining to a selected subset of rules.
Another solution, from the field of deductive databases, is to rewrite the
rule set, using information from the goal, so that only relevant variable
bindings — those belonging to a so-called magic set — are considered
during forward inference.
iii. Improved conjunct ordering: This involves finding an ordering to solve
the conjuncts of the rule premise so that the total cost of matching
against facts from the knowledge base is minimized. It turns out that
finding the optimal ordering is NP-hard, but good heuristics are avail-
able. For example, the minimum-remaining-values (MRV) heuristic
used for CSPs can be applied.
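Improvement i (incremental forward chaining) can be sketched in propositional form: an agenda holds newly inferred facts, and only rules whose premise mentions one of them are re-checked. The rule encoding below is illustrative, not from the course:

```python
from collections import defaultdict

def forward_chain(rules, facts):
    """rules: list of (premises, conclusion) over propositional symbols.
    Only rules mentioning a newly inferred symbol are re-examined."""
    watching = defaultdict(list)      # symbol -> rules whose premise mentions it
    for rule in rules:
        for p in rule[0]:
            watching[p].append(rule)
    known = set(facts)
    agenda = list(facts)              # facts not yet propagated
    while agenda:
        new = agenda.pop()
        for premises, conclusion in watching[new]:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                agenda.append(conclusion)
    return known

# Hypothetical rule base: P & Q => R, R => S, T => U
rules = [({'P', 'Q'}, 'R'), ({'R'}, 'S'), ({'T'}, 'U')]
print(sorted(forward_chain(rules, {'P', 'Q'})))  # -> ['P', 'Q', 'R', 'S']
```

Note that the rule T => U is never re-checked here, since neither T nor U is ever on the agenda.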
2. Planning
(a) Initial state is:
Robot(A) ∧Robot(B) ∧ Package(P ) ∧ Package(Q)∧
OnFloor(P ) ∧OnFloor(Q) ∧Room(X) ∧Room(Y ) ∧Room(Z)∧
Corridor(X, Y ) ∧ Corridor(X,Z) ∧ Corridor(Y,X) ∧ Corridor(Z,X)∧
In(A,X) ∧ In(B,X) ∧ In(P,X) ∧ In(Q,X)
Goal state is:
In(A,X) ∧ In(B,X) ∧ In(P, Y ) ∧ In(Q,Z)
Deduct 0.5 points for each error, including unnecessary literals and missing
literals. Deduct 1 point if they use negation!
(b) i. Action(Go(r, rm1, rm2),
Precond: Robot(r), Room(rm1), Room(rm2), In(r, rm1), Corridor(rm1, rm2)
Effect: In(r, rm2), ¬In(r, rm1), (when Holds(r, p): ¬In(p, rm1), In(p, rm2)))
ii. Action(Drop(r, p),
Precond: Robot(r), Package(p), Holds(r, p)
Effect: ¬Holds(r, p), OnFloor(p))
iii. Action(Pickup(r, p),
Precond: Robot(r), Package(p), OnFloor(p)
Effect: ¬OnFloor(p), Holds(r, p))
(c) Actions: {Start, Finish, Pickup(A,P), Pickup(B,Q),
Go(A,X,Y), Go(B,X,Z),
Drop(A,P), Drop(B,Q), Go(A,Y,X), Go(B,Z,X)}
Orderings: {Pickup(A,P) ≺ Go(A,X,Y), Go(A,X,Y) ≺ Drop(A,P),
Drop(A,P) ≺ Go(A,Y,X),
Pickup(B,Q) ≺ Go(B,X,Z), Go(B,X,Z) ≺ Drop(B,Q),
Drop(B,Q) ≺ Go(B,Z,X)}
Links: {Pickup(A,P) --Holds(A,P)--> Go(A,X,Y), Go(A,X,Y) --In(P,Y)--> Drop(A,P),
Drop(A,P) --In(A,Y)--> Go(A,Y,X), Pickup(B,Q) --Holds(B,Q)--> Go(B,X,Z),
Go(B,X,Z) --In(Q,Z)--> Drop(B,Q), Drop(B,Q) --In(Q,Z)--> Go(B,Z,X)}
Open preconditions: {}
Main advantage: the two robots' subplans are independent, so partial-order
planning can leave them unordered and the robots can act in parallel.
Deduct 1 point for each incorrect action, order or link. Deduct 0.5 points
for missing out the open preconditions and/or missing Start and Finish.
(d)
i. Switch(x): x is a light switch
On(r): the light in room r is on
(Flip(x, y), robot x flipping switch y, is an action rather than a
predicate; it is defined in part ii.)
Deduct 1 point if they define a predicate Off that is logically unrelated
to On.
ii. Action(Flip(x, y, r),
Precond: Robot(x), Switch(y), In(x, r), In(y, r)
Effect: (when On(r): ¬On(r)), (when ¬On(r): On(r)))
Deduct 0.5 points for getting the argument to On wrong (i.e., the switch
rather than the room; the former will cause problems for finding solutions
to the planning problem). Deduct 1 point for not including the conditional
effects.
iii. There are lots of plans that work. Here is one:
[Flip(A,SX), Pickup(A,P), Pickup(B,Q), Go(A,X,Y),
Go(B,X,Z), Drop(B,Q), Drop(A,P), Flip(A,SY), Flip(B,SZ),
Go(A,Y,X), Go(B,Z,X)]
Deduct 1 point for each error
Part C
1. (a) The annotations are shown in the following annotated diagram of the game
tree:
[Annotated game-tree diagram omitted; each node is labelled with its Max/Min score.]
Half a mark for each correct enumerator plus 1.5 marks for the denominators.
(b) The correct strategy is RL. For full marks, the answer should trace the
reasoning required to come up with this strategy.
(c) i. No, the Max player would choose L in the first step (one mark), as the
heuristic value for this state (3) is higher than for M (2) and R (2) (two
marks), and would lose to Min in the next step (two marks).
ii. This would work better (one mark), as it would keep Max away from L and
M (which would have a value of -1, while R has a value of 0) (three
marks), and Max would choose the right strategy (one mark).
2. Bayesian Inference
(a) The Boolean variables are: O (o if a woman is over 40); B (b if a woman has
breast cancer); M (m if the mammography test came out positive). The
Bayes Net should be as shown in Figure 1.
Deduct 1 point for not having the right variables; 2 points for each incorrect
dependency, and 2 points for each mistake in the CPTs.
The network structure is O → B → M, with the following CPTs:
P(o) = 0.5
P(b | o) = 0.1, P(b | ¬o) = 0.05
P(m | b) = 0.7, P(m | ¬b) = 0.2
Figure 1: The Bayes Net for the Breast Cancer Problem
(b)
~P(B | ¬m, ¬o) = α P(¬o) ~P(B | ¬o) ~P(¬m | B)
P(b | ¬m, ¬o) = α P(¬o) P(b | ¬o) P(¬m | b)
= α · 0.5 · 0.05 · 0.3
= α · 0.0075
P(¬b | ¬m, ¬o) = α P(¬o) P(¬b | ¬o) P(¬m | ¬b)
= α · 0.5 · 0.95 · 0.8
= α · 0.380
~P(B | ¬m, ¬o) = α ⟨0.0075, 0.380⟩ = ⟨0.019, 0.981⟩
P(b | ¬m, ¬o) ≈ 0.019
Deduct 2 points for multiplying the wrong probabilities; deduct 1 point for
each wrong table lookup, and deduct 1 point for each arithmetic error, but
don't penalise for the way those errors percolate through the calculations.
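The posterior can be double-checked numerically, without intermediate rounding (CPT values as in Figure 1; the P(¬o) factor cancels in normalisation, so it is omitted):

```python
# Exact posterior P(b | ¬m, ¬o)
p_b_given_not_o = 0.05         # P(b | ¬o), from the CPT
p_not_m_given_b = 1 - 0.7      # P(¬m | b)
p_not_m_given_not_b = 1 - 0.2  # P(¬m | ¬b)

num_b  = p_b_given_not_o * p_not_m_given_b            # 0.05 * 0.3 = 0.015
num_nb = (1 - p_b_given_not_o) * p_not_m_given_not_b  # 0.95 * 0.8 = 0.76
posterior = num_b / (num_b + num_nb)
print(round(posterior, 3))  # -> 0.019
```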
(c) No, you don’t need to know this, because M is conditionally independent of
O given B (as shown in the Bayes Net).
Give 1 point for getting the right answer, 1 point for saying “conditional
independence”.
(d)
P(b | m1, ¬m2, ¬o) = α P(¬o) P(b | ¬o) P(m1 | b) P(¬m2 | b) by Naive Bayes
= α · 0.5 · 0.05 · 0.7 · 0.3
= α · 0.00525
P(¬b | m1, ¬m2, ¬o) = α P(¬o) P(¬b | ¬o) P(m1 | ¬b) P(¬m2 | ¬b)
= α · 0.5 · 0.95 · 0.2 · 0.8
= α · 0.076
P(b | m1, ¬m2, ¬o) = 0.00525 / (0.00525 + 0.076) ≈ 0.065
Give 3 points for getting the formula right, and the remaining 4 points for
getting it all right. If they make a mistake, but note something must be wrong
when compared with the probability in part (b) and the likelihoods given by
the CPTs, then give them 1 bonus point for knowing it's wrong!
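A quick numerical check of the two-test posterior, again without intermediate rounding (CPT values as in Figure 1; P(¬o) cancels in normalisation):

```python
# P(b | m1, ¬m2, ¬o) under the Naive Bayes assumption:
# M1 and M2 are conditionally independent given B.
num_b  = 0.05 * 0.7 * (1 - 0.7)        # P(b|¬o) P(m1|b) P(¬m2|b)  = 0.0105
num_nb = (1 - 0.05) * 0.2 * (1 - 0.2)  # P(¬b|¬o) P(m1|¬b) P(¬m2|¬b) = 0.152
posterior = num_b / (num_b + num_nb)
print(round(posterior, 3))  # -> 0.065
```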