<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="author" content="Fredrik Danielsson, http://lostkeys.se">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="robots" content="noindex" />
<title>Brandon Rozek</title>
<link rel="stylesheet" href="themes/bitsandpieces/styles/main.css" type="text/css" />
<link rel="stylesheet" href="themes/bitsandpieces/styles/highlightjs-github.css" type="text/css" />
</head>
<body>
<aside class="main-nav">
<nav>
<ul>
<li class="menuitem ">
<a href="index.html%3Findex.html" data-shortcut="">
Home
</a>
</li>
<li class="menuitem ">
<a href="index.html%3Fcourses.html" data-shortcut="">
Courses
</a>
</li>
<li class="menuitem ">
<a href="index.html%3Flabaide.html" data-shortcut="">
Lab Aide
</a>
</li>
<li class="menuitem ">
<a href="index.html%3Fpresentations.html" data-shortcut="">
Presentations
</a>
</li>
<li class="menuitem ">
<a href="index.html%3Fresearch.html" data-shortcut="">
Research
</a>
</li>
<li class="menuitem ">
<a href="index.html%3Ftranscript.html" data-shortcut="">
Transcript
</a>
</li>
</ul>
</nav>
</aside>
<main class="main-content">
<article class="article">
<h1>Reinforcement Learning</h1>
<p>Reinforcement learning is the art of analyzing situations and mapping them to actions in order to maximize a numerical reward signal.</p>
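<p>One standard way to make "maximize a numerical reward signal" precise, following the notation of the Sutton and Barto textbook below, is to maximize the expected discounted return</p>
<p>$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1},$$</p>
<p>where the discount rate $0 \le \gamma \le 1$ controls how much future rewards count relative to immediate ones.</p>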
<p>In this independent study, Dr. Stephen Davies and I will explore the reinforcement learning problem and its subproblems. We will go over the bandit problem and Markov decision processes, and discover how best to frame a problem in order to <strong>make decisions</strong>.</p>
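<p>As a small taste of the bandit problem, here is a minimal sketch (not code from this study; the arm means are made up) of a sample-average $\varepsilon$-greedy agent on a $k$-armed bandit:</p>
<pre><code class="python">import random

def epsilon_greedy_bandit(true_means, steps=1000, epsilon=0.1):
    """Sample-average epsilon-greedy agent for a k-armed bandit."""
    k = len(true_means)
    estimates = [0.0] * k   # estimated value of each arm
    counts = [0] * k        # number of times each arm was pulled
    total_reward = 0.0
    for _ in range(steps):
        # Exploit with probability 1 - epsilon, otherwise explore at random.
        if random.random() > epsilon:
            arm = max(range(k), key=lambda a: estimates[a])
        else:
            arm = random.randrange(k)
        # Reward is drawn from a unit-variance Gaussian around the arm's true mean.
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental sample-average update of the action-value estimate.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

# Hypothetical 4-armed testbed; the third arm is the best in expectation.
print(epsilon_greedy_bandit([0.2, 0.5, 1.5, 1.0]))
</code></pre>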
<p>I have provided a list of topics that I wish to explore in a <a href="index.html%3Fresearch%252FReinforcementLearning%252Fsyllabus.html">syllabus</a>.</p>
<h2>Readings</h2>
<p>To spend more time learning, I decided to follow a textbook this time.</p>
<p><em>Reinforcement Learning: An Introduction</em> by Richard S. Sutton and Andrew G. Barto</p>
<p><a href="index.html%3Fresearch%252FReinforcementLearning%252Freadings.html">Reading Schedule</a> </p>
<h2>Notes</h2>
<p>The notes for this course are a heavily summarized version of the textbook. There will also be notes on whatever side tangents Dr. Davies and I explore.</p>
<p><a href="index.html%3Fresearch%252FReinforcementLearning%252Fnotes.html">Notes page</a></p>
<p>I wrote a short, quirky report describing the bandit problem. It is great for learning about the common considerations in reinforcement learning problems.</p>
<p><a href="/files/research/TheBanditReport.pdf">The Bandit Report</a></p>
<h2>Code</h2>
<p>Code is occasionally written to solidify the learning material and to serve as an aid for further exploration.</p>
<p><a href="https://github.com/brandon-rozek/ReinforcementLearning">GitHub Link</a></p>
<p>If you want to see agents I've created to solve some OpenAI environments, take a look at this folder in the GitHub repository:</p>
<p><a href="https://github.com/Brandon-Rozek/ReinforcementLearning/tree/master/agents">GitHub Link</a></p>
</article>
</main>
<script src="themes/bitsandpieces/scripts/highlight.js"></script>
<script src="themes/bitsandpieces/scripts/mousetrap.min.js"></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {
inlineMath: [ ['$','$'], ["\\(","\\)"] ],
processEscapes: true
}
});
</script>
<script type="text/javascript"
src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<script>
hljs.initHighlightingOnLoad();
document.querySelectorAll('.menuitem a').forEach(function(el) {
if (el.getAttribute('data-shortcut').length > 0) {
Mousetrap.bind(el.getAttribute('data-shortcut'), function() {
location.assign(el.getAttribute('href'));
});
}
});
</script>
</body>
</html>