Announcements
▪ Homework 3: MDP and RL
▪ Has been released, due Monday 2/24, at 11:59pm
Reinforcement Learning:
Model-Based and Model-Free Learning
CSCI 4150: Introduction to Artificial Intelligence (Spring 2025)
Oshani Seneviratne
Assistant Professor in Computer Science
senevo@rpi.edu
February 11, 2025
Acknowledgment:
Most of the content in these slides was adapted from the slides created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.
All CS188 materials are available at http://ai.berkeley.edu.
Reinforcement Learning
▪ We still assume an MDP:
▪ A set of states s ∈ S
▪ A set of actions (per state) A
▪ A model T(s,a,s’)
▪ A reward function R(s,a,s’)
▪ Still looking for a policy π(s)
▪ New twist: don’t know T or R, so must try out actions
▪ Big idea: Compute all averages over T using sample outcomes
Model-Based Learning
Model-Based Learning
▪ Model-Based Idea:
▪ Learn an approximate model based on experiences
▪ Solve for values as if the learned model were correct
▪ Step 1: Learn empirical MDP model
▪ Count outcomes s’ for each s, a
▪ Normalize to give an estimate of the transition model T(s, a, s’)
▪ Discover each reward R(s, a, s’) when we experience (s, a, s’)
▪ Step 2: Solve the learned MDP
▪ For example, use value iteration, as before
Example: Model-Based Learning
Input Policy π (gridworld states: A on top, B C D in the middle row, E on the bottom)
Assume: γ = 1

Observed Episodes (Training):
Episode 1: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 2: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 3: E, north, C, -1; C, east, D, -1; D, exit, x, +10
Episode 4: E, north, C, -1; C, east, A, -1; A, exit, x, -10

Learned Model:
T(s, a, s’):
T(B, east, C) = 1.00
T(C, east, D) = 0.75
T(C, east, A) = 0.25
…
R(s, a, s’):
R(B, east, C) = -1
R(C, east, D) = -1
R(D, exit, x) = +10
…
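As a concrete illustration of Step 1, here is a minimal Python sketch (not from the slides) that builds the empirical model from the four episodes above by counting outcomes and normalizing; the variable names (counts, rewards, T_hat) are my own.

from collections import defaultdict

# Episodes from the slide: lists of (s, a, s', r) transitions.
episodes = [
    [("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("E", "north", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("E", "north", "C", -1), ("C", "east", "A", -1), ("A", "exit", "x", -10)],
]

# Step 1: count outcomes s' for each (s, a), and record observed rewards.
counts = defaultdict(lambda: defaultdict(int))   # counts[(s, a)][s'] = N(s, a, s')
rewards = {}                                     # rewards[(s, a, s')] = R(s, a, s')
for episode in episodes:
    for s, a, s2, r in episode:
        counts[(s, a)][s2] += 1
        rewards[(s, a, s2)] = r

# Normalize counts to get the estimated transition model.
T_hat = {
    (s, a, s2): n / sum(outcomes.values())
    for (s, a), outcomes in counts.items()
    for s2, n in outcomes.items()
}

print(T_hat[("C", "east", "D")])    # 0.75
print(T_hat[("C", "east", "A")])    # 0.25
print(rewards[("D", "exit", "x")])  # 10

Step 2 would then run value iteration (or policy evaluation) on T_hat and rewards as if they were the true model.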
Example: Expected Age
Goal: Compute expected age of CSCI 4150 students

Known P(A):
E[A] = Σ_a P(a) · a

Unknown P(A): “Model Based”
Estimate P(a) from sample counts, then E[A] ≈ Σ_a (num(a)/N) · a
Why does this work? Because eventually you learn the right model.

Unknown P(A): “Model Free”
Without P(A), instead collect samples [a1, a2, … aN] and average them directly: E[A] ≈ (1/N) Σ_i a_i
Why does this work? Because samples appear with the right frequencies.
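A small Python sketch of the two estimators, using made-up ages (the numbers are illustrative, not real class data); for this simple average the two views give the same answer, which is the point of the comparison.

import random
from collections import Counter

random.seed(0)
# Hypothetical age distribution; the learner does not get to see it directly.
true_ages = [18, 19, 19, 20, 20, 20, 21, 21, 22, 25]

# Collect samples by "asking" random students.
samples = [random.choice(true_ages) for _ in range(1000)]
N = len(samples)

# Model-based: estimate P(a) from counts, then compute the expectation under it.
counts = Counter(samples)
model_based = sum((n / N) * a for a, n in counts.items())

# Model-free: average the samples directly, never building an explicit model.
model_free = sum(samples) / N

print(model_based, model_free)  # identical here; both approach E[A] as N grows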
Model-Free Learning
Passive Reinforcement Learning
Passive Reinforcement Learning
▪ Simplified task: policy evaluation
▪ Input: a fixed policy π(s)
▪ You don’t know the transitions T(s,a,s’)
▪ You don’t know the rewards R(s,a,s’)
▪ Goal: learn the state values
▪ In this case:
▪ Learner is “along for the ride”
▪ No choice about what actions to take
▪ Just execute the policy and learn from experience
▪ This is NOT offline planning! You actually take actions in the world.
Direct Evaluation
▪ Goal: Compute values for each state under π
▪ Idea: Average together observed sample values
▪ Act according to π
▪ Every time you visit a state, write down what the
sum of discounted rewards turned out to be
▪ Average those samples
▪ This is called direct evaluation
Example: Direct Evaluation
Input Policy π (gridworld states: A on top, B C D in the middle row, E on the bottom)
Assume: γ = 1

Observed Episodes (Training):
Episode 1: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 2: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 3: E, north, C, -1; C, east, D, -1; D, exit, x, +10
Episode 4: E, north, C, -1; C, east, A, -1; A, exit, x, -10

Output Values:
V(A) = -10, V(B) = +8, V(C) = +4, V(D) = +10, V(E) = -2
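A minimal Python sketch (not part of the slides) of direct evaluation on the four episodes above: every time a state is visited, record the sum of discounted rewards from there to the end of the episode, then average those samples per state.

from collections import defaultdict

episodes = [
    [("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("E", "north", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("E", "north", "C", -1), ("C", "east", "A", -1), ("A", "exit", "x", -10)],
]
gamma = 1.0

returns = defaultdict(list)  # returns[s] = observed sums of discounted rewards from s
for episode in episodes:
    for i, (s, a, s2, r) in enumerate(episode):
        # Discounted return from this visit of s to the end of the episode.
        G = sum((gamma ** k) * step[3] for k, step in enumerate(episode[i:]))
        returns[s].append(G)

V = {s: sum(gs) / len(gs) for s, gs in returns.items()}
print(V)  # {'B': 8.0, 'C': 4.0, 'D': 10.0, 'E': -2.0, 'A': -10.0}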
Problems with Direct Evaluation
▪ What’s good about direct evaluation?
▪ It’s easy to understand
▪ It doesn’t require any knowledge of T, R
▪ It eventually computes the correct average values,
using just sample transitions
▪ What’s bad about it?
▪ It wastes information about state connections
▪ Each state must be learned separately
▪ So, it takes a long time to learn
Output Values:
V(A) = -10, V(B) = +8, V(C) = +4, V(D) = +10, V(E) = -2
If B and E both go to C under this policy, how can their values be different?
Why Not Use Policy Evaluation?
▪ Simplified Bellman updates calculate V for a fixed policy:
V^π_0(s) = 0
V^π_{k+1}(s) ← Σ_{s’} T(s, π(s), s’) [ R(s, π(s), s’) + γ V^π_k(s’) ]
▪ Each round, replace V with a one-step-look-ahead layer over V
▪ This approach fully exploited the connections between the states
▪ Unfortunately, we need T and R to do it!
▪ Key question: how can we do this update to V without knowing T and R?
▪ In other words, how do we take a weighted average without knowing the weights?
[Diagram: one-step lookahead from s through the q-state (s, π(s)) to successors s’]
Sample-Based Policy Evaluation?
▪ We want to improve our estimate of V by computing these averages:
V^π_{k+1}(s) ← Σ_{s’} T(s, π(s), s’) [ R(s, π(s), s’) + γ V^π_k(s’) ]
▪ Idea: Take samples of outcomes s’ (by doing the action!) and average
sample_1 = R(s, π(s), s’_1) + γ V^π_k(s’_1)
sample_2 = R(s, π(s), s’_2) + γ V^π_k(s’_2)
…
sample_n = R(s, π(s), s’_n) + γ V^π_k(s’_n)
V^π_{k+1}(s) ← (1/n) Σ_i sample_i
[Diagram: from s, taking π(s) leads to sampled successors s’_1, s’_2, s’_3]
Almost! But we can’t rewind time to get sample after sample from state s.
Example: Learning to Walk
Panels: Initial | A Learning Trial | After Learning [1K Trials]
[Kohl and Stone, ICRA 2004]
Example: Learning to Walk
Initial
[Video: AIBO WALK – initial]
[Kohl and Stone, ICRA 2004]
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/11/AIBO-WALK-initial.mpeg
Example: Learning to Walk
Training
[Video: AIBO WALK – training]
[Kohl and Stone, ICRA 2004]
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/11/AIBO-WALK-training-1.mpeg
Example: Learning to Walk
Finished
[Video: AIBO WALK – finished]
[Kohl and Stone, ICRA 2004]
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/11/AIBO-WALK-finished.mpeg
Example: Sidewinding
[Andrew Ng]
[Video: SNAKE – climbStep+sidewinding]
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/11/SNAKE-climbStep-and-sidewinding.mov
The Crawler!
[Demo: Crawler Bot (L10D1)] [You, in Project 3]
Video of Demo Crawler Bot
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/11/CrawlerBot.mp4
Temporal Difference Learning
Temporal Difference Learning
▪ Big idea: learn from every experience!
▪ Update V(s) each time we experience a transition (s, a, s’, r)
▪ Likely outcomes s’ will contribute updates more often
▪ Temporal difference learning of values
▪ Policy still fixed, still doing evaluation!
▪ Move values toward value of whatever successor occurs: running average
[Diagram: from s, following π(s), observing successor s’]
Sample of V(s): sample = R(s, π(s), s’) + γ V^π(s’)
Update to V(s): V^π(s) ← (1-α) V^π(s) + α · sample
Same update: V^π(s) ← V^π(s) + α · (sample - V^π(s))
Exponential Moving Average
▪ Exponential moving average
▪ The running interpolation update: x̄_n = (1-α) · x̄_{n-1} + α · x_n
▪ Makes recent samples more important:
x̄_n = [x_n + (1-α) · x_{n-1} + (1-α)^2 · x_{n-2} + …] / [1 + (1-α) + (1-α)^2 + …]
▪ Forgets about the past (distant past values were wrong anyway)
▪ Decreasing learning rate (alpha) can give converging averages
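A tiny sketch of the running interpolation update, with illustrative samples and α = 0.5, showing how recent samples dominate the plain average.

def ema_update(x_bar, x_new, alpha):
    """Running interpolation update: recent samples get weight alpha."""
    return (1 - alpha) * x_bar + alpha * x_new

x_bar = 0.0
for x in [0, 0, 10, 10]:       # later samples should matter more
    x_bar = ema_update(x_bar, x, 0.5)
print(x_bar)  # 7.5, closer to the recent samples than the plain average (5.0)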
Example: Temporal Difference Learning
Assume: γ = 1, α = 1/2
States (gridworld: A on top, B C D in the middle row, E on the bottom)

Observed Transitions:
Initial values:        A=0, B=0,  C=0, D=8, E=0
After B, east, C, -2:  A=0, B=-1, C=0, D=8, E=0
After C, east, D, -2:  A=0, B=-1, C=3, D=8, E=0
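A minimal Python sketch of the TD value update applied to the two observed transitions above; the initial values (V(D) = 8, all others 0), γ = 1, and α = 1/2 are taken from the slide.

V = {"A": 0.0, "B": 0.0, "C": 0.0, "D": 8.0, "E": 0.0}
gamma = 1.0
alpha = 0.5

def td_update(V, s, s2, r, gamma, alpha):
    """Move V(s) toward the observed sample r + gamma * V(s') (running average)."""
    sample = r + gamma * V[s2]
    V[s] = (1 - alpha) * V[s] + alpha * sample

td_update(V, "B", "C", -2, gamma, alpha)
print(V["B"])  # -1.0, matching the slide
td_update(V, "C", "D", -2, gamma, alpha)
print(V["C"])  # 3.0, matching the slide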
Problems with TD Value Learning
▪ TD value learning is a model-free way to do policy evaluation, mimicking
Bellman updates with running sample averages
▪ However, if we want to turn values into a (new) policy, we’re sunk:
π(s) = argmax_a Σ_{s’} T(s, a, s’) [ R(s, a, s’) + γ V(s’) ], which needs T and R
▪ Idea: learn Q-values, not values
▪ Makes action selection model-free too!
[Diagram: from s, action a leads to the q-state (s, a), then to successors s’]
Active Reinforcement Learning
Active Reinforcement Learning
▪ Full reinforcement learning: optimal policies (like value iteration)
▪ You don’t know the transitions T(s,a,s’)
▪ You don’t know the rewards R(s,a,s’)
▪ You choose the actions now
▪ Goal: learn the optimal policy / values
▪ In this case:
▪ Learner makes choices!
▪ Fundamental tradeoff: exploration vs. exploitation
▪ This is NOT offline planning! You actually take actions in the world and
find out what happens…
Detour: Q-Value Iteration
▪ Value iteration: find successive (depth-limited) values
▪ Start with V_0(s) = 0, which we know is right
▪ Given V_k, calculate the depth k+1 values for all states:
V_{k+1}(s) ← max_a Σ_{s’} T(s, a, s’) [ R(s, a, s’) + γ V_k(s’) ]
▪ But Q-values are more useful, so compute them instead
▪ Start with Q_0(s, a) = 0, which we know is right
▪ Given Q_k, calculate the depth k+1 q-values for all q-states:
Q_{k+1}(s, a) ← Σ_{s’} T(s, a, s’) [ R(s, a, s’) + γ max_{a’} Q_k(s’, a’) ]
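For reference, a rough Python sketch of one round of Q-value iteration, assuming the model is known; the dictionary layout (T keyed by (s, a) with lists of (s’, prob), R keyed by (s, a, s’)) is my own choice for illustration, not anything from the course code.

def actions_from(s, T):
    """Actions available from state s, read off the (hypothetical) transition model."""
    return [a for (s0, a) in T.keys() if s0 == s]

def q_value_iteration_step(Q, T, R, gamma):
    """One round: Q_{k+1}(s,a) = sum_{s'} T(s,a,s') [ R(s,a,s') + gamma * max_a' Q_k(s',a') ]."""
    Q_next = {}
    for (s, a), outcomes in T.items():          # outcomes: list of (s', probability)
        total = 0.0
        for s2, prob in outcomes:
            best_next = max((Q.get((s2, a2), 0.0) for a2 in actions_from(s2, T)), default=0.0)
            total += prob * (R[(s, a, s2)] + gamma * best_next)
        Q_next[(s, a)] = total
    return Q_next

# One step on part of the learned model from the earlier slide (partial model, just the mechanics):
T = {("C", "east"): [("D", 0.75), ("A", 0.25)], ("B", "east"): [("C", 1.0)]}
R = {("C", "east", "D"): -1, ("C", "east", "A"): -1, ("B", "east", "C"): -1}
print(q_value_iteration_step({}, T, R, gamma=1.0))  # {('C', 'east'): -1.0, ('B', 'east'): -1.0}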
Q-Learning
▪ Q-Learning: sample-based Q-value iteration
▪ Learn Q(s,a) values as you go
▪ Receive a sample (s, a, s’, r)
▪ Consider your old estimate: Q(s, a)
▪ Consider your new sample estimate: sample = r + γ max_{a’} Q(s’, a’)
▪ Incorporate the new estimate into a running average: Q(s, a) ← (1-α) Q(s, a) + α · sample
[Demo: Q-learning – gridworld (L10D2)]
[Demo: Q-learning – crawler (L10D3)]
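A minimal Python sketch of the Q-learning update described above; γ = 0.9, α = 0.5, and the legal_actions stand-in are illustrative assumptions, not values from the slides or Project 3.

from collections import defaultdict

Q = defaultdict(float)   # Q-values start at 0, which we know is right
gamma = 0.9              # discount (illustrative)
alpha = 0.5              # learning rate (illustrative)

def q_learning_update(Q, s, a, s2, r, legal_actions, gamma, alpha):
    """Blend the old estimate Q(s,a) with the new sample r + gamma * max_a' Q(s',a')."""
    best_next = max((Q[(s2, a2)] for a2 in legal_actions(s2)), default=0.0)
    sample = r + gamma * best_next
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

# Hypothetical usage on a gridworld-style transition (legal_actions is a stand-in):
legal_actions = lambda s: [] if s == "x" else ["north", "south", "east", "west", "exit"]
q_learning_update(Q, "C", "east", "D", -1, legal_actions, gamma, alpha)
print(Q[("C", "east")])  # -0.5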
Video of Demo Q-Learning -- Gridworld
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/11/Q-learning--gridworld.mp4
Q-Learning Properties
▪ Amazing result: Q-learning converges to optimal policy -- even
if you’re acting suboptimally!
▪ This is called off-policy learning
▪ Caveats:
▪ You have to explore enough
▪ You have to eventually make the learning rate
small enough
▪ … but not decrease it too quickly
▪ Basically, in the limit, it doesn’t matter how you select actions (!)
The Story So Far: MDPs and RL
Known MDP: Offline Solution
  Goal: Compute V*, Q*, π*        Technique: Value / policy iteration
  Goal: Evaluate a fixed policy π Technique: Policy evaluation

Unknown MDP: Model-Based
  Goal: Compute V*, Q*, π*        Technique: VI/PI on approx. MDP
  Goal: Evaluate a fixed policy π Technique: PE on approx. MDP

Unknown MDP: Model-Free
  Goal: Compute V*, Q*, π*        Technique: Q-learning
  Goal: Evaluate a fixed policy π Technique: Value learning
Next Class: Exploration vs. Exploitation