---
draft: false
title: "Partially Observable Hierarchical Reinforcement Learning with AI Planning (Student Abstract)"
authors: [
    "Brandon Rozek",
    "Junkyu Lee",
    "Harsha Kokel",
    "Michael Katz",
    "Shirin Sohrabi"
]
date: 2024-03-24
publish_date: "2024/03/24"
conference: "AAAI Conference on Artificial Intelligence"
isbn: ""
doi: "10.1609/aaai.v38i21.30504"
volume: 38
firstpage: 23635
lastpage: 23636
language: "English"
pdf_url: "https://ojs.aaai.org/index.php/AAAI/article/view/30504/32640"
abstract: "Partially observable Markov decision processes (POMDPs) challenge reinforcement learning agents due to incomplete knowledge of the environment. Even assuming monotonicity in uncertainty, it is difficult for an agent to know how and when to stop exploring for a given task. In this abstract, we discuss how to use hierarchical reinforcement learning (HRL) and AI Planning (AIP) to improve exploration when the agent knows possible valuations of unknown predicates and how to discover them. By encoding the uncertainty in an abstract planning model, the agent can derive a high-level plan which is then used to decompose the overall POMDP into a tree of semi-POMDPs for training. We evaluate our agent's performance on the MiniGrid domain and show how guided exploration may improve agent performance."
---