Research page updates

parent eb02764efb
commit 0e5f69e827
1 changed file with 19 additions and 4 deletions

@@ -13,10 +13,25 @@ design and implement artificial intelligent agents using computational logic. I'
- Explainability through verifiable chains of inference
- Defeasible reasoning under uncertainty
- Reasoning about agents and their cognitive states
- Automated planning under ethical constraints

[Notes on Automated Theorem Proving](atp)

## Integrated Planning and Reinforcement Learning

Working with [Junkyu Lee](https://researcher.ibm.com/researcher/view.php?person=ibm-Junkyu.Lee),
[Michael Katz](https://researcher.watson.ibm.com/researcher/view.php?person=ibm-Michael.Katz1),
and [Shirin Sohrabi](https://researcher.watson.ibm.com/researcher/view.php?person=us-ssohrab)
on extending and relaxing assumptions within their existing
[Planning Annotated Reinforcement Learning Framework](https://prl-theworkshop.github.io/prl2021/papers/PRL2021_paper_36.pdf) developed at IBM Research.

In this framework, automated planning is used on a higher-level version of the overall
problem, with a surjective function mapping RL states to AP states. The agent is
based on the options framework in Hierarchical Reinforcement Learning, where options
are defined as the grounded actions in the planning model.

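As a rough sketch of that setup (the grid-world abstraction, the names, and the placeholder policy below are my own illustration, not code from the framework), a surjective function sends many low-level RL states to a single planning state, and each grounded planning action induces an option with an initiation set, an intra-option policy, and a termination condition:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Hypothetical types for illustration: an RL state is a grid position,
# and a planning (AP) state is the room that position lies in.
RLState = Tuple[int, int]
APState = str

def abstraction(s: RLState) -> APState:
    """Surjective map from RL states to AP states: many cells share one room."""
    x, _ = s
    return "room_left" if x < 5 else "room_right"

@dataclass
class Option:
    """An option in the hierarchical RL sense: initiation set, policy, termination."""
    name: str
    can_start: Callable[[RLState], bool]   # initiation set
    policy: Callable[[RLState], str]       # intra-option policy over low-level actions
    terminates: Callable[[RLState], bool]  # termination condition

def option_from_grounded_action(action: str, src: APState, dst: APState) -> Option:
    """A grounded planning action such as move(room_left, room_right) becomes one
    option: start anywhere that abstracts to src, act until the current state
    abstracts to dst."""
    return Option(
        name=action,
        can_start=lambda s: abstraction(s) == src,
        policy=lambda s: "right" if dst == "room_right" else "left",  # placeholder low-level policy
        terminates=lambda s: abstraction(s) == dst,
    )

# Example: the option induced by move(room_left, room_right).
opt = option_from_grounded_action("move(room_left, room_right)", "room_left", "room_right")
assert opt.can_start((2, 3)) and not opt.terminates((2, 3))
assert opt.terminates((7, 3))
```

Surjectivity matters here because it guarantees every planning state is realized by at least one concrete RL state, so each option induced by a grounded action has a non-empty initiation set.
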
More to come...

## Symbolic Methods for Cryptography

Working with [Dr. Andrew Marshall](https://www.marshallandrew.net/) and others on applying term reasoning within computational logic
to cryptography. This collaboration was previously funded under an ONR grant. We are interested in applying techniques such