<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <meta name="author" content="Fredrik Danielsson, http://lostkeys.se">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta name="robots" content="noindex" />
  <title>Brandon Rozek</title>
  <link rel="stylesheet" href="themes/bitsandpieces/styles/main.css" type="text/css" />
  <link rel="stylesheet" href="themes/bitsandpieces/styles/highlightjs-github.css" type="text/css" />
</head>
<body>

<aside class="main-nav">
  <nav>
    <ul>
      <li class="menuitem ">
        <a href="index.html%3Findex.html" data-shortcut="">Home</a>
      </li>
      <li class="menuitem ">
        <a href="index.html%3Fcourses.html" data-shortcut="">Courses</a>
      </li>
      <li class="menuitem ">
        <a href="index.html%3Flabaide.html" data-shortcut="">Lab Aide</a>
      </li>
      <li class="menuitem ">
        <a href="index.html%3Fpresentations.html" data-shortcut="">Presentations</a>
      </li>
      <li class="menuitem ">
        <a href="index.html%3Fresearch.html" data-shortcut="">Research</a>
      </li>
      <li class="menuitem ">
        <a href="index.html%3Ftranscript.html" data-shortcut="">Transcript</a>
      </li>
    </ul>
  </nav>
</aside>

<main class="main-content">
<article class="article">

<h1>Chapter 5: Monte Carlo Methods</h1>

<p>Monte Carlo methods do not assume complete knowledge of the environment. They require only <em>experience</em>: sample sequences of states, actions, and rewards from actual or simulated interaction with an environment.</p>

<p>Monte Carlo methods are ways of solving the reinforcement learning problem based on averaging sample returns. To ensure that well-defined returns are available, we define Monte Carlo methods only for episodic tasks. Only on the completion of an episode are value estimates and policies changed.</p>

<p>Monte Carlo methods sample and average returns for each state-action pair, much like the bandit methods explored earlier. The main difference is that there are now multiple states, each acting like a different bandit problem, and the problems are interrelated. Because all of the action selections are undergoing learning, the problem becomes nonstationary from the point of view of the earlier states.</p>

<h2>Monte Carlo Prediction</h2>

<p>Recall that the value of a state is the expected return (the expected cumulative future discounted reward) starting from that state. One way to estimate it from experience is to average the returns observed after visits to that state.</p>
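
<p>For example, if the returns observed after the first visits to a particular state in three separate episodes are 2, 8, and 5, then the first-visit estimate of that state's value is their average, 5.</p>
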
<p>Each occurrence of state $s$ in an episode is called a <em>visit</em> to $s$. The <em>first-visit MC method</em> estimates $v_\pi(s)$ as the average of the returns following first visits to $s$, whereas the <em>every-visit MC method</em> averages the returns following all visits to $s$. These two Monte Carlo methods are very similar but have slightly different theoretical properties.</p>

<p><u>First-visit MC prediction</u></p>

<pre><code>Initialize:
    π ← policy to be evaluated
    V ← an arbitrary state-value function
    Returns(s) ← an empty list, for all s ∈ S

Repeat forever:
    Generate an episode using π
    For each state s appearing in the episode:
        G ← the return that follows the first occurrence of s
        Append G to Returns(s)
        V(s) ← average(Returns(s))</code></pre>
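
<p>As a concrete illustration, here is a minimal Python sketch of first-visit MC prediction. It is not from the text: it assumes a caller-supplied <code>generate_episode()</code> that runs the policy being evaluated and returns a finished episode as a list of <code>(state, reward)</code> pairs, where each reward is the one received after leaving that state.</p>

<pre><code>from collections import defaultdict

def first_visit_mc_prediction(generate_episode, num_episodes, gamma=1.0):
    """Estimate V(s) under the policy used inside generate_episode()."""
    returns = defaultdict(list)   # Returns(s): list of observed returns per state
    V = defaultdict(float)        # state-value estimates
    for _ in range(num_episodes):
        episode = generate_episode()          # [(S_0, R_1), (S_1, R_2), ...]
        # Work backwards to get the return that follows each time step.
        G = 0.0
        returns_after = [0.0] * len(episode)
        for t in range(len(episode) - 1, -1, -1):
            _, reward = episode[t]
            G = gamma * G + reward
            returns_after[t] = G
        # Average only the return following the first visit to each state.
        seen = set()
        for t, (state, _) in enumerate(episode):
            if state not in seen:
                seen.add(state)
                returns[state].append(returns_after[t])
                V[state] = sum(returns[state]) / len(returns[state])
    return V</code></pre>
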

<h2>Monte Carlo Estimation of Action Values</h2>

<p>If a model is not available then it is particularly useful to estimate <em>action</em> values rather than state values. With a model, state values alone are sufficient to define a policy. Without a model, however, state values alone are not sufficient. One must explicitly estimate the value of each action in order for the values to be useful in suggesting a policy.</p>

<p>The only complication is that many state-action pairs may never be visited. If $\pi$ is a deterministic policy, then in following $\pi$ one will observe returns only for one of the actions from each state. With no returns to average, the Monte Carlo estimates of the other actions will not improve with experience. This is a serious problem because the purpose of learning action values is to help in choosing among the actions available in each state.</p>

<p>This is the general problem of <em>maintaining exploration</em>. For policy evaluation to work for action values, we must assure continual exploration. One way to do this is by specifying that the episodes <em>start in a state-action pair</em>, and that each pair has a nonzero probability of being selected as the start. We call this the assumption of <em>exploring starts</em>.</p>

<h2>Monte Carlo Control</h2>

<p>In order to easily obtain a guarantee of convergence for the Monte Carlo method, we made two unlikely assumptions above. One was that the episodes have exploring starts, and the other was that policy evaluation could be done with an infinite number of episodes.</p>

<p><u>Monte Carlo Exploring Starts</u></p>

<pre><code>Initialize, for all s ∈ S, a ∈ A(s):
    Q(s,a) ← arbitrary
    π(s) ← arbitrary
    Returns(s,a) ← empty list

Repeat forever:
    Choose S_0 ∈ S and A_0 ∈ A(S_0) s.t. all pairs have probability > 0
    Generate an episode starting from S_0, A_0, following π
    For each pair s,a appearing in the episode:
        G ← the return that follows the first occurrence of s,a
        Append G to Returns(s,a)
        Q(s,a) ← average(Returns(s,a))
    For each s in the episode:
        π(s) ← argmax_a Q(s,a)</code></pre>
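
<p>The Python sketch below shows how the per-episode part of Monte Carlo ES might be implemented: first-visit averaging of action values followed by greedy policy improvement. It is illustrative rather than from the text; it assumes the episode is a list of <code>(state, action, reward)</code> triples generated under exploring starts, that <code>Q</code>, <code>returns_sa</code>, and <code>policy</code> are dictionaries maintained across episodes, and that <code>actions(s)</code> is a caller-supplied function listing the actions available in state <code>s</code>.</p>

<pre><code>def mc_es_episode_update(episode, Q, returns_sa, policy, actions, gamma=1.0):
    """One Monte Carlo ES sweep over a finished episode.

    episode    : list of (state, action, reward) triples
    Q          : dict mapping (state, action) to its estimated value
    returns_sa : dict mapping (state, action) to a list of observed returns
    policy     : dict mapping state to the currently greedy action
    actions    : callable returning the available actions in a state (assumed)
    """
    # Return that follows each time step, computed backwards.
    G = 0.0
    returns_after = [0.0] * len(episode)
    for t in range(len(episode) - 1, -1, -1):
        _, _, reward = episode[t]
        G = gamma * G + reward
        returns_after[t] = G
    # First-visit averaging for every state-action pair in the episode.
    seen = set()
    for t, (s, a, _) in enumerate(episode):
        if (s, a) not in seen:
            seen.add((s, a))
            returns_sa.setdefault((s, a), []).append(returns_after[t])
            Q[(s, a)] = sum(returns_sa[(s, a)]) / len(returns_sa[(s, a)])
    # Greedy policy improvement at every state visited in the episode.
    for s in {s for s, _, _ in episode}:
        policy[s] = max(actions(s), key=lambda a: Q.get((s, a), 0.0))</code></pre>
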

<h2>Monte Carlo Control without Exploring Starts</h2>

<p>The only general way to ensure that all actions are selected infinitely often is for the agent to continue to select them. There are two approaches to ensuring this, resulting in what we call <em>on-policy</em> methods and <em>off-policy</em> methods.</p>

<p>On-policy methods attempt to evaluate or improve the policy that is used to make decisions, whereas off-policy methods evaluate or improve a policy different from that used to generate the data.</p>

<p>In on-policy control methods the policy is generally <em>soft</em>, meaning that $\pi(a|s) > 0$ for all $s \in \mathcal{S}$ and all $a \in \mathcal{A}(s)$. The on-policy method in this section uses $\epsilon$-greedy policies, meaning that most of the time it chooses an action that has maximal estimated action value, but with probability $\epsilon$ it instead selects an action at random.</p>

<p><u>On-policy first-visit MC control (for $\epsilon$-soft policies)</u></p>

<pre><code>Initialize, for all s ∈ S, a ∈ A(s):
    Q(s,a) ← arbitrary
    Returns(s,a) ← empty list
    π(a|s) ← an arbitrary ε-soft policy

Repeat forever:
    (a) Generate an episode using π
    (b) For each pair s,a appearing in the episode:
            G ← the return that follows the first occurrence of s,a
            Append G to Returns(s,a)
            Q(s,a) ← average(Returns(s,a))
    (c) For each s in the episode:
            A* ← argmax_a Q(s,a)    (with ties broken arbitrarily)
            For all a ∈ A(s):
                π(a|s) ← 1 - ε + ε/|A(s)|   if a = A*
                         ε/|A(s)|            if a ≠ A*</code></pre>
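
<p>The policy-improvement step in part (c) can be written compactly. The sketch below is only an illustration (the function name and dictionary layout are mine, not from the text): it takes the action values of a single state and returns the new $\epsilon$-soft selection probabilities.</p>

<pre><code>def epsilon_soft_probs(q_values, epsilon):
    """q_values: dict mapping each action to Q(s, a) for one state s."""
    n = len(q_values)
    a_star = max(q_values, key=q_values.get)    # ties broken arbitrarily
    return {a: 1 - epsilon + epsilon / n if a == a_star else epsilon / n
            for a in q_values}

# With epsilon = 0.1 and two actions, the greedy action gets probability 0.95
# and the other gets 0.05, so every action keeps a nonzero chance of selection.
print(epsilon_soft_probs({"left": 1.0, "right": 0.5}, epsilon=0.1))</code></pre>
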

</article>
</main>

<script src="themes/bitsandpieces/scripts/highlight.js"></script>
<script src="themes/bitsandpieces/scripts/mousetrap.min.js"></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    processEscapes: true
  }
});
</script>

<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<script>
hljs.initHighlightingOnLoad();

document.querySelectorAll('.menuitem a').forEach(function(el) {
  if (el.getAttribute('data-shortcut').length > 0) {
    Mousetrap.bind(el.getAttribute('data-shortcut'), function() {
      location.assign(el.getAttribute('href'));
    });
  }
});
</script>

</body>
</html>