Announcements
▪ Feb 18: No class (Monday Schedule)
▪ Feb 21: Recap for Exam 1
▪ Feb 25: In-class Exam 1
▪ 1 hour long.
▪ Please be present in DCC 318 at 10 AM sharp.
▪ 3 questions from:
▪ Search, Constraint Satisfaction Problems, Logic
▪ Feb 28: We will resume regular lectures with Introduction to Probability
and Bayes Nets.
Reinforcement Learning:
Exploration vs. Exploitation
CSCI 4150: Introduction to Artificial Intelligence (Spring 2025)
Oshani Seneviratne
Assistant Professor in Computer Science
senevo@rpi.edu
February 14, 2025
Acknowledgment:
Most of the content in these slides was adapted from the slides created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.
All CS188 materials are available at http://ai.berkeley.edu.
The Story So Far: MDPs and RL
Known MDP: Offline Solution
▪ Compute V*, Q*, π*: value / policy iteration
▪ Evaluate a fixed policy π: policy evaluation
Unknown MDP: Model-Based
▪ Compute V*, Q*, π*: VI/PI on approximate MDP
▪ Evaluate a fixed policy π: PE on approximate MDP
Unknown MDP: Model-Free
▪ Compute V*, Q*, π*: Q-learning
▪ Evaluate a fixed policy π: value learning
Model-Free Learning
▪ Model-free (temporal difference) learning
▪ Experience world through episodes
▪ Update estimates each transition
▪ Over time, updates will mimic Bellman updates
[Diagram: an episode unrolls as s → (s, a) → s' → (s', a') → s'', with reward r observed on each transition.]
Q-Learning
▪ We’d like to do Q-value updates to each Q-state:
    Q_{k+1}(s, a) ← Σ_{s'} T(s, a, s') [ R(s, a, s') + γ max_{a'} Q_k(s', a') ]
▪ But we can’t compute this update without knowing T, R
▪ Instead, compute the average as we go
▪ Receive a sample transition (s, a, r, s')
▪ This sample suggests
    sample = r + γ max_{a'} Q(s', a')
▪ But we want to average over results from (s, a) (Why? A single sample is noisy: the same
(s, a) can lead to different s' and r.)
▪ So keep a running average (a code sketch follows):
    Q(s, a) ← (1 − α) Q(s, a) + α · sample
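A minimal sketch of this running-average update in Python, assuming a hypothetical environment object with reset()/step() methods and a list of legal actions; all names here are illustrative, not the course's project code.

    import random
    from collections import defaultdict

    def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.9):
        """Tabular Q-learning: keep a running average of samples for each (s, a)."""
        Q = defaultdict(float)                     # Q[(s, a)] defaults to 0.0
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                # Off-policy: any sufficiently exploratory behavior works; here, pure random.
                a = random.choice(actions)
                s_next, r, done = env.step(a)      # sample transition (s, a, r, s')
                sample = r if done else r + gamma * max(Q[(s_next, a2)] for a2 in actions)
                # Running average: Q(s, a) <- (1 - alpha) Q(s, a) + alpha * sample
                Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
                s = s_next
        return Q

The greedy policy is then read off as argmax_a Q(s, a); acting randomly while learning is exactly the off-policy behavior discussed on the next slide.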
Q-Learning Properties
▪ Amazing result: Q-learning converges to optimal policy -- even if you’re
acting suboptimally!
▪ This is called off-policy learning
▪ Caveats:
▪ You have to explore enough
▪ You have to eventually make the learning rate small enough
▪ … but not decrease it too quickly (otherwise, you would have averaged together only a
small amount of experience; see the schedule sketch after this list)
▪ Basically, in the limit, it doesn’t matter how you select actions.
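One way to satisfy both caveats (an illustrative assumption, not something the slides prescribe) is a per-(s, a) learning rate that decays with the visit count, such as α = 1 / N(s, a): it shrinks toward zero but slowly enough to keep averaging in new experience (its sum diverges while its sum of squares stays finite).

    from collections import defaultdict

    visit_count = defaultdict(int)   # N(s, a)

    def learning_rate(s, a):
        """Decaying per-(s, a) learning rate alpha = 1 / N(s, a) (illustrative schedule)."""
        visit_count[(s, a)] += 1
        return 1.0 / visit_count[(s, a)]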
Video of Demo Q-Learning Auto Cliff Grid
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/12/Qlearning--auto--cliff-grid.mp4
How to Explore?
▪ Several schemes for forcing exploration
▪ Simplest: random actions (ε-greedy); a selection sketch follows this list
▪ Every time step, flip a coin
▪ With (small) probability ε, act randomly
▪ With (large) probability 1 − ε, act on the current policy
▪ Problems with random actions?
▪ You do eventually explore the space, but keep
thrashing around once learning is done
▪ One solution: lower ε over time
▪ Another solution: exploration functions
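A minimal sketch of ε-greedy action selection, assuming Q is a dict keyed by (state, action) and actions lists the legal actions; the names are illustrative.

    import random

    def epsilon_greedy(Q, s, actions, epsilon):
        """With probability epsilon act randomly (explore); otherwise act on the current policy."""
        if random.random() < epsilon:
            return random.choice(actions)                 # explore
        return max(actions, key=lambda a: Q[(s, a)])      # exploit

Lowering epsilon over time (for example, multiplying it by a decay factor each episode) is the first fix mentioned above; exploration functions, discussed next, are the second.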
Video of Demo Q-learning – Manual Exploration – Bridge Grid
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/12/Qlearning--manual exploration--bridge-grid.mp4
Video of Demo Q-learning – Epsilon-Greedy – Crawler
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/12/Qlearning--epsilon greedy--crawler.mp4
80% of the time, the bot takes a random action, leading to high exploration. Lowering epsilon (reducing randomness) allows the bot to exploit learned knowledge.
Exploration Functions
▪ When to explore?
▪ Random actions: explore a fixed amount
▪ Better idea: explore areas whose badness is not
(yet) established, eventually stop exploring
▪ Exploration function
▪ Takes a value estimate u and a visit count n, and
returns an optimistic utility, e.g., f(u, n) = u + k / n
▪ Note: this propagates the “bonus” back to states that lead to unknown states as well!
Regular Q-Update:
    Q(s, a) ← (1 − α) Q(s, a) + α [ r + γ max_{a'} Q(s', a') ]
Modified Q-Update (a code sketch follows):
    Q(s, a) ← (1 − α) Q(s, a) + α [ r + γ max_{a'} f(Q(s', a'), N(s', a')) ]
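A minimal sketch of the modified update, assuming dicts Q and N keyed by (state, action) and a bonus constant k; the helper names and the handling of unvisited pairs are illustrative assumptions.

    def exploration_value(u, n, k=1.0):
        """Optimistic utility f(u, n) = u + k/n (unvisited pairs are treated as n = 1
        here so the bonus stays finite; that handling is an assumption, not from the slides)."""
        return u + k / max(n, 1)

    def modified_q_update(Q, N, s, a, r, s_next, actions, alpha=0.1, gamma=0.9, k=1.0):
        N[(s, a)] = N.get((s, a), 0) + 1
        # The target uses f(Q, N) at the successor, so rarely visited (s', a') look better.
        target = r + gamma * max(
            exploration_value(Q.get((s_next, a2), 0.0), N.get((s_next, a2), 0), k)
            for a2 in actions
        )
        Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * target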
Video of Demo Q-learning – Exploration Function – Crawler
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/12/Qlearning--epsilon greedy--crawler.mp4
Regret
▪ Even if you learn the optimal policy, you
still make mistakes along the way!
▪ Regret is a measure of your total mistake
cost: the difference between your
(expected) rewards, including
suboptimality, and optimal (expected)
rewards
▪ Minimizing regret goes beyond learning to
be optimal – it requires efficiently learning
to be optimal.
▪ Example: random exploration and
exploration functions both end up
optimal, but random exploration has a
higher regret.
Learning from past errors helps the agent avoid
repeating them.
Approximate Q-Learning
Generalizing Across States
▪ Basic Q-Learning keeps a table of all q-values
▪ In realistic situations, we cannot possibly learn
about every single state!
▪ Too many states to visit them all in training
▪ Too many states to hold the q-tables in memory
▪ Instead, we want to generalize:
▪ Learn about some small number of training states from
experience
▪ Generalize that experience to new, similar situations
▪ This is a fundamental idea in machine learning, and we’ll
see it over and over again
Example: Pacman
Let’s say we discover
through experience
that this state is bad:
In naïve q-learning,
we know nothing
about this state:
Or even this one!
Spot the difference?
There’s a food pellet missing
here.
Video of Demo Q-Learning Pacman – Tiny – Watch All
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/12/Qlearning--pacman--tiny--watch-all.mp4
Video of Demo Q-Learning Pacman – Tiny – Silently Train
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/12/Qlearning--pacman--tiny--silent-train.mp4
Video of Demo Q-Learning Pacman – Tricky – Watch All
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/12/Qlearning--pacman--tricky--watch-all.mp4
Feature-Based Representations
▪ Solution: describe a state using a vector of
features (properties)
▪ Features are functions from states to real numbers
(often 0/1) that capture important properties of the
state
▪ Example features:
▪ Distance to closest ghost
▪ Distance to closest dot
▪ Number of ghosts
▪ 1 / (dist to dot)²
▪ Is Pacman in a tunnel? (0/1)
▪ …… etc.
▪ Is it the exact state on this slide?
▪ Can also describe a q-state (s, a) with features (e.g.
action moves closer to food)
Linear Value Functions
▪ Using a feature representation, we can write a q function (or value function) for any
state using a few weights:
    V(s) = w_1 f_1(s) + w_2 f_2(s) + … + w_n f_n(s)
    Q(s, a) = w_1 f_1(s, a) + w_2 f_2(s, a) + … + w_n f_n(s, a)
▪ Advantage: our experience is summed up in a few powerful numbers
▪ Disadvantage: states may share features but actually be very different in value!
Approximate Q-Learning
▪ Q-learning with linear Q-functions:
    transition = (s, a, r, s')
    difference = [ r + γ max_{a'} Q(s', a') ] − Q(s, a)
    Exact Q’s:       Q(s, a) ← Q(s, a) + α · difference
    Approximate Q’s: w_i ← w_i + α · difference · f_i(s, a)
▪ Intuitive interpretation:
▪ Adjust weights of active features
▪ If something unexpectedly bad happens, blame the features that were active in that
state, and dis-prefer all states with those features
▪ Formal justification: online least squares
(Here [ r + γ max_{a'} Q(s', a') ] is the current sample approximation, and the f_i(s, a) are
features that describe the state-action pair; a code sketch of the approximate update follows.)
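A minimal sketch of the approximate Q-learning update, assuming a user-supplied features(s, a) function that returns a dict of feature values; the function and all names here are illustrative, not the course's project code.

    def linear_q(weights, feats):
        """Q(s, a) = sum_i w_i * f_i(s, a) over a dict of feature values."""
        return sum(weights.get(name, 0.0) * value for name, value in feats.items())

    def approx_q_update(weights, features, s, a, r, s_next, actions, alpha=0.01, gamma=0.9):
        """One weight update: w_i <- w_i + alpha * difference * f_i(s, a)."""
        q_next = max((linear_q(weights, features(s_next, a2)) for a2 in actions), default=0.0)
        feats = features(s, a)
        difference = (r + gamma * q_next) - linear_q(weights, feats)
        for name, value in feats.items():
            # Only features that are active (nonzero) in (s, a) get their weights adjusted.
            weights[name] = weights.get(name, 0.0) + alpha * difference * value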
Example: Q-Pacman
Video of Demo Approximate Q-Learning -- Pacman
Video: http://cs.rpi.edu/academics/courses/spring25/csci4150/website/lectures/videos/12/Approximate-Qlearning--pacman.mp4
Q-Learning and Least Squares
Linear Approximation: Regression*
Prediction (one feature): ŷ = w_0 + w_1 f_1(x)    Prediction (two features): ŷ = w_0 + w_1 f_1(x) + w_2 f_2(x)
Optimization: Least Squares*
[Plot: observed points vs. the fitted prediction line; the vertical gap at each point is the error or “residual.”]
    total error = Σ_i ( y_i − prediction(x_i) )² = Σ_i ( y_i − Σ_k w_k f_k(x_i) )²
Minimizing Error*
Approximate q update explained:
Imagine we had only one point x, with features f(x), target value y, and weights w:
    error(w) = ½ ( y − Σ_k w_k f_k(x) )²
    ∂ error(w) / ∂ w_m = − ( y − Σ_k w_k f_k(x) ) f_m(x)
    w_m ← w_m + α ( y − Σ_k w_k f_k(x) ) f_m(x)
Here y is the “target” and Σ_k w_k f_k(x) is the “prediction”: we step in the opposite
direction of the gradient to minimize the error. The approximate q update has exactly this
form, with target r + γ max_{a'} Q(s', a') and prediction Q(s, a) (a numeric sketch follows):
    w_m ← w_m + α [ r + γ max_{a'} Q(s', a') − Q(s, a) ] f_m(s, a)
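One least-squares gradient step on a single point, with made-up weights and feature values chosen purely for illustration:

    alpha = 0.1
    weights = {"f1": 1.0, "f2": -0.5}
    feats = {"f1": 2.0, "f2": 1.0}     # f(x) for the single point x
    y = 4.0                            # target

    prediction = sum(weights[k] * feats[k] for k in feats)   # 1.0*2.0 + (-0.5)*1.0 = 1.5
    error = y - prediction                                   # 4.0 - 1.5 = 2.5
    for k in feats:
        weights[k] += alpha * error * feats[k]               # w_m <- w_m + alpha * error * f_m(x)
    # weights is now {"f1": 1.5, "f2": -0.25}; the new prediction, 2.75, has moved toward y.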
[Plot: a degree 15 polynomial fit to the training points, illustrating overfitting.]
Overfitting: Why Limiting Capacity Can Help*
Policy Search
▪ Problem: often the feature-based policies that work well (win games, maximize
utilities) aren’t the ones that approximate V / Q best
▪ Q-learning’s priority: get Q-values close (modeling)
▪ Action selection priority: get ordering of Q-values right (prediction)
▪ We’ll see this distinction between modeling and prediction again later in the course
▪ Solution: learn policies that maximize rewards, not the values that predict them
▪ Policy search: start with an ok solution (e.g., Q-learning), then fine-tune by hill
climbing on feature weights
Policy Search
▪ Simplest policy search:
▪ Start with an initial linear value function or Q-function
▪ Nudge each feature weight up and down and see if your policy is better than before (a
sketch of this loop follows after this list)
▪ Problems:
▪ How do we tell the policy got better?
▪ Need to run many sample episodes!
▪ If there are a lot of features, this can be impractical
▪ Better methods exploit lookahead structure, sample wisely, change
multiple parameters…
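A minimal sketch of that simplest weight-nudging search, assuming a user-supplied evaluate(weights) that runs sample episodes and returns the average reward of the resulting policy; the function and parameter names are illustrative.

    import copy

    def hill_climb_weights(weights, evaluate, step=0.1, iterations=20):
        """Nudge each feature weight up and down; keep any change that makes the policy better."""
        best_score = evaluate(weights)             # in practice this needs many sample episodes
        for _ in range(iterations):
            for name in list(weights):
                for delta in (+step, -step):
                    candidate = copy.deepcopy(weights)
                    candidate[name] += delta
                    score = evaluate(candidate)
                    if score > best_score:         # did the policy get better?
                        weights, best_score = candidate, score
        return weights

With many features this is impractical, which is why the bullet above points to methods that exploit lookahead structure and sample more wisely.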
Example: Reinforcement Learning with Spot
Video: https://www.youtube.com/watch?v=Kf9WDqYKYQQ
Conclusion
▪ We’re done with Part I: Search and Planning!
▪ We’ve seen how AI methods can solve
problems in:
▪ Search
▪ Constraint Satisfaction Problems
▪ Games
▪ Markov Decision Problems
▪ Reinforcement Learning
▪ Next up: Exam 1
▪ And then Part II: Uncertainty and Learning!