Date: 2022-10-25
ROB 456: Intelligent Robots
Bayes’ rule in action: Using sensors to update state estimates
AKA: I think I saw a door open – did I really see a door open? I’m so unsure…
Why is Bayes’ Rule so powerful?
• We can use it to compute probabilities of events that we cannot directly
observe
1
Bayes' Theorem
• Robot is in front of a door. It is trying to determine if the door is open or
closed. It has a magic sensor that returns True if the door is open, False if
the door is closed. Unfortunately, the sensor sucks (like most sensors) so
the sensor is often wrong.
• Bayes’ theorem: we have two random variables
– Door: Door = open or Door = closed
– Z (our sensor): Z=True or Z=False
2
Running example for these slides
• Robot is in front of the door. We have some probability of the door being open
or closed (initially, usually equal probability since we don’t know anything – 0.5
and 0.5)
• Take a sensor reading
• Apply probability rules to calculate a NEW value for the probability of the door
being open:
– P(open | z) and P(closed | z)
▪ By laws of probability, btw, P(closed) = 1 – P(open)
• Gives us a new value for the probability of being open P(open)
• Now you can take another sensor reading… or go through the door… or try to
open the door, but key idea:
– Improve your estimate of the state (door open or closed) by using a sensor (z =
true/false)
– Use Bayes’ rule to push the probability values around
▪ We need to know what the probabilities are – Bayes’ rule lets us JUST estimate the
probability/noise of the sensor, instead of trying to estimate the probability of the door
being open/closed
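The update loop described above can be sketched in a few lines of Python. The sensor-model numbers below are hypothetical stand-ins for illustration, not values from these slides:

```python
# Minimal sketch of one Bayes'-rule update for the door example.
# The sensor-model numbers are hypothetical, for illustration only.

def bayes_update(p_open, p_z_given_open, p_z_given_closed):
    """Return P(open | z=True) from the prior P(open) and the sensor model."""
    numerator = p_z_given_open * p_open
    evidence = p_z_given_open * p_open + p_z_given_closed * (1.0 - p_open)
    return numerator / evidence

p_open = 0.5                 # maximally uninformative prior
P_TRUE_GIVEN_OPEN = 0.6      # hypothetical: P(z=True | Door=open)
P_TRUE_GIVEN_CLOSED = 0.3    # hypothetical: P(z=True | Door=closed)

# One sensor reading of z = True pushes the probability around:
p_open = bayes_update(p_open, P_TRUE_GIVEN_OPEN, P_TRUE_GIVEN_CLOSED)
print(p_open)  # ≈ 0.667 – the reading raised P(open)
```

Note that the only numbers we had to supply are the two sensor probabilities – exactly the point of the slide.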
3
Where we’re going/what we’re doing, Part 1
• We need values for our probabilities. Some are easier to get than others
• Think about an open door sensor. It will be wrong (sensors always are) but
can we calculate:
– The probability of it returning true if the door is open P(z=True | Door = open)
▪ I can open the door and take a sensor reading lots of times and see how often the
sensor is wrong (empirical measurement of sensor probabilities)
– I can close the door and measure the number of times it’s wrong as well
• Going the other direction – directly modeling the diagnostic probability….
– P(Door = open | z=True)
– I’d have to have a complete model of the physical/algorithmic behavior of the
sensor in order to estimate how/why the sensor returns that value... and if I knew
that, I’d just improve the sensor
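The empirical calibration idea – hold the door open, read the sensor many times, and count – can be simulated directly. The noise rates in this simulated sensor are invented for the sketch:

```python
# Estimate P(z=True | Door=open) by counting sensor readings while the
# door is held open. The sensor here is simulated with made-up noise rates.
import random

random.seed(0)  # reproducible "experiment"

def noisy_sensor(door_open):
    """Simulated door sensor: fires True at a state-dependent rate.
    The 0.6 / 0.3 rates are hypothetical, not values from the slides."""
    p_true = 0.6 if door_open else 0.3
    return random.random() < p_true

N = 10_000
hits = sum(noisy_sensor(door_open=True) for _ in range(N))
p_true_given_open = hits / N
print(round(p_true_given_open, 2))  # ≈ 0.6, the rate we built in
```

Repeating the experiment with the door closed gives the other measured number, P(z=True | Door=closed).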
4
Where we’re going/what we’re doing, Part 2
Sensor: Could be a camera & an algorithm to analyze image
or a contact sensor on the door…
• Re-write the above theorem with our Door and Sensor variables
– What is the likelihood (in English?)
– What is the prior? Evidence?
– Which probability values do we have to know? Which are we calculating with
this formula?
▪ You will need the Law of Total Probability from the Probability slides to get P(y)
(see next slide if you get stuck)
▪ It might be helpful to add time to this – “before” and “after” the sensor
reading
5
Exercise activity: Bayes’ theorem labeling
Normalization (defining η – the denominator)
Recall from the Law of Total Probability, Marginals slide:
6
We’ll see that in practice we never need to directly
calculate/measure P(y) – we just sum up our probabilities
for each state.
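The normalization trick – never computing P(y) directly – looks like this in code. The state names and sensor numbers are illustrative:

```python
# Compute unnormalized posteriors for every state, then normalize by
# their sum. eta = 1 / P(z), but P(z) is never measured directly.
prior = {"open": 0.5, "closed": 0.5}
likelihood = {"open": 0.6, "closed": 0.3}   # hypothetical P(z=True | state)

unnormalized = {s: likelihood[s] * prior[s] for s in prior}
eta = 1.0 / sum(unnormalized.values())
posterior = {s: eta * u for s, u in unnormalized.items()}
print(posterior)  # posteriors sum to 1 by construction
```

Summing the unnormalized values over every state is exactly the Law of Total Probability applied to the denominator.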
• P(open|z) is diagnostic.
• P(z|open) is causal.
• Often causal knowledge is easier to obtain
– This is measurement error – how often the sensor is right/wrong
• Bayes’ rule allows us to convert causal knowledge to diagnostic knowledge
7
Causal vs. Diagnostic Reasoning
Count frequencies!
• The state variable Door is either open or not open
– Initially have no idea, so set P(open) = P(not open) = 0.5
▪ Remember, they have to sum to 1
• We need FOUR probabilities for the sensor P(z=True | Door = open), and the
three other combinations of True/False and open/closed
– Since each conditional distribution over z must sum to one, we actually only need
to determine 2 empirically – the others can be calculated
▪ P(z = False | Door = open) = 1 – P(z = True | Door = open)
▪ P(z = False | Door = closed) = 1 – P(z = True | Door = closed)
• Notice that P(z=True | Door = open) + P(z = True | Door = closed ) does NOT
need to sum to one
– They come from two different conditional distributions – think of them as the
sensor’s true-positive and false-positive rates
• We DON’T need to calculate the closed condition – we can get it from (1 – open)
– but it’s a good idea to check your math…
• Look for the Law of Marginal probabilities being applied
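The bookkeeping above – measure two numbers, derive the other two – is one line each. The measured values here are hypothetical:

```python
# Two measured numbers (hypothetical) pin down all four sensor probabilities.
p_true_open = 0.6                       # measured: P(z=True | Door=open)
p_true_closed = 0.3                     # measured: P(z=True | Door=closed)
p_false_open = 1.0 - p_true_open        # derived:  P(z=False | Door=open)
p_false_closed = 1.0 - p_true_closed    # derived:  P(z=False | Door=closed)

# Sanity check: the two True rates need NOT sum to one
print(round(p_true_open + p_true_closed, 2))  # 0.9 here – and that is fine
```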
8
A worked out example
Different sensors mean different values…
• These are our measured probabilities
• Should they add up to 1?
• NO!
9
Example: Sensor probabilities
• Apply Bayes’ rule
• What are my priors? I don’t know which is more likely, so…
10
Example: Applying Bayes’ rule
• Apply Bayes’ rule
• Measurement z raises our belief that the door is open
11
Example: Putting in the probabilities for each term
12
Example: Summary
Prior – in this case you don’t know,
so maximally uninformative
z being true raises
probability that
door is open
z returns True more often when the door is open
Applying our Law of Marginal Probabilities to get p(z)
• What about P(closed)?
• Or use what you know:
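Checking the closed-door math both ways, as suggested, with the same hypothetical sensor numbers used in the sketches above:

```python
# P(closed | z) two ways: directly via Bayes' rule, and as 1 - P(open | z).
# Sensor numbers are hypothetical.
prior_open, prior_closed = 0.5, 0.5
p_z_open, p_z_closed = 0.6, 0.3   # P(z=True | open), P(z=True | closed)

p_z = p_z_open * prior_open + p_z_closed * prior_closed   # total probability
p_open_given_z = p_z_open * prior_open / p_z
p_closed_given_z = p_z_closed * prior_closed / p_z

# Good idea to check your math: both routes must agree
assert abs(p_closed_given_z - (1.0 - p_open_given_z)) < 1e-12
print(round(p_closed_given_z, 3))  # ≈ 0.333
```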
13
Example: What about the closed probability?
• Robots typically have more than one sensor. Suppose we add another
sensor - this one with different measurement error.
– How do we incorporate that knowledge?
14
If one is good, two must be better…
15
Example: Second Measurement
Prior – from previous
update calculation
z2 lowers probability
that door is open
Note – different sensor means
different probabilities
In this case, z being True
means the door is probably
closed
z1 is first sensor, z2 is this one
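The second update can be reproduced with illustrative numbers in which sensor 2 fires True slightly more often on a closed door, so z2 = True lowers P(open). These are not the slide’s actual values:

```python
# Second update: the posterior from sensor 1 becomes the prior for sensor 2.
# Both sensor models here are illustrative.
p_open = 2.0 / 3.0                 # belief after the first sensor reading
P_Z2_OPEN, P_Z2_CLOSED = 0.5, 0.6  # sensor 2 fires True more when closed

u_open = P_Z2_OPEN * p_open
u_closed = P_Z2_CLOSED * (1.0 - p_open)
p_open = u_open / (u_open + u_closed)
print(round(p_open, 3))  # 0.625 – z2 lowered the belief that the door is open
```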
Combining Evidence
• Suppose our robot obtains another observation from another sensor z3?
• How can we integrate this new information?
• More generally, how can we estimate P(x | z1, …, zn)?
16
Markov assumption: zn is independent of z1,...,zn-1 if we know x.
17
Recursive Bayesian Updating
This is the math used to justify
simply applying the result of the
second sensor; independence
doesn’t really hold in real life, but
generally is “safe” with some
caveats…
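Under that conditional-independence assumption, recursive updating reduces to folding in one reading at a time – each posterior becomes the next prior. All sensor numbers here are hypothetical:

```python
# Recursive Bayesian updating: each reading's posterior is the next prior.
# Each tuple is (P(z | open), P(z | closed)) for one reading; all hypothetical.
def update(p_open, p_z_open, p_z_closed):
    u_open = p_z_open * p_open
    u_closed = p_z_closed * (1.0 - p_open)
    return u_open / (u_open + u_closed)

readings = [(0.6, 0.3), (0.5, 0.6), (0.6, 0.3)]   # z1, z2, z3
belief = 0.5                                       # uninformative prior
for p_z_open, p_z_closed in readings:
    belief = update(belief, p_z_open, p_z_closed)
print(round(belief, 3))  # ≈ 0.769 after three readings
```

Notice the robot never needs to store the full reading history – just the current belief.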
• In the previous slide we explicitly said these are different sensors
• What if we use the same sensor but just take a second reading?
– Markov assumption – does it hold? Why or why not?
18
Exercise activity: Multiple sensor readings
• Two possible locations x1 and x2
• P(x1)=0.99 (so P(x2)=0.01)
• P(z|x2)=0.09, P(z|x1)=0.07
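With the numbers on this slide, feeding the SAME reading back through Bayes’ rule as if each copy were independent eventually flips the belief, even though P(x1) started at 0.99:

```python
# The pitfall: reuse one reading z over and over as if independent.
# Each update multiplies the odds for x2 by 0.09/0.07, so the 0.99
# prior on x1 is eventually overwhelmed.
p_x1, p_x2 = 0.99, 0.01
P_Z_X1, P_Z_X2 = 0.07, 0.09   # P(z | x1), P(z | x2) from the slide

n = 0
while p_x2 < 0.5:
    u1, u2 = P_Z_X1 * p_x1, P_Z_X2 * p_x2
    p_x1, p_x2 = u1 / (u1 + u2), u2 / (u1 + u2)
    n += 1
print(n, round(p_x2, 3))  # after 19 "independent" repeats, x2 wins
```

This is why the Markov assumption matters: repeated readings from the same stuck sensor are not independent evidence.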
19
A Typical Pitfall (same sensor over and over)
Initializations are important
AND
Understand your sensor model
• What would be some reasonable sensors
we could use to help the robot update
where it thinks it is?
– Part 1: What are some reasonable
real-world sensors?
– Part 2: How do you convert those sensor
readings into something useful for y in
Bayes’ equation?
▪ How does this differ if you’re using the
Room versus the Bin random variable?
20
Exercise activity: Sensors
[Figure: map of a house – Kitchen, Dining room, Living room, Bedroom – with the Robot’s position marked]
• Sensor reading: A position pos = x,y
• Let x be which Room, pos is a physical
location in the room:
– P(robot being in Dining room given that its
position pos = 1,1) = [fill in here]
• How would you find reasonable estimates
for those values?
– To make it easier, let position be a finite
set of positions, rather than a continuous
location
• Which is easier to write the probabilities
for? P(x|y) or P(y|x)?
21
Exercise activity: Spell Bayes’ rule out in English for a
sensor reading
[Figure: same house map – Kitchen, Dining room, Living room, Bedroom – with the Robot’s position marked]