---
Title: Research
Description: A list of my research projects
---
**[Quick List of Publications](publications)**
**Broad Research Interests:** Automated Reasoning, Artificial Intelligence, Formal Methods
## Logic-Based AI
Working with [Dr. Selmer Bringsjord](https://homepages.rpi.edu/~brings/) and others in the [RAIR Lab](https://rair.cogsci.rpi.edu/) to design and implement artificially intelligent agents whose behavior is verifiable via chains of inference. More details coming soon...
## Symbolic Methods for Cryptography
Worked with [Dr. Andrew Marshall](https://www.marshallandrew.net/) under an ONR grant, in collaboration with the University at Albany, Clarkson University, the University of Texas at Dallas, and the Naval Research Laboratory, to automatically generate and verify cryptographic algorithms using symbolic (as opposed to computational) methods.
During that time I built a free-algebra library, a term rewriting library, and parts of the crypto tool, and dabbled in unification algorithms. You can check them out on [Github](https://github.com/symcollab/CryptoSolve).
Currently, I am an external collaborator who mainly helps maintain the codebase I started and contributes to our ongoing work on garbled circuits. We presented our work at [UNIF 2020](https://www3.risc.jku.at/publications/download/risc_6129/proceedings-UNIF2020.pdf#page=58) ([slides](/files/research/UNIF2020-Slides.pdf)) and FROCOS 2021, and have a couple of other papers in the works.
Through my collaborators, I've learned about term reasoning and algebras. [[Notes]](termreasoning)
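To give a flavor of the symbolic side, here is a minimal sketch of syntactic unification over first-order terms. It is purely illustrative and is not the CryptoSolve implementation; the term encoding (strings for variables, tuples for function applications) is an assumption made just for this example.

```python
# Minimal syntactic unification sketch (illustrative only; not the CryptoSolve API).
# A term is either a variable (a plain string) or a function application
# represented as a tuple: ("f", arg1, arg2, ...).

def is_var(t):
    return isinstance(t, str)

def substitute(t, subst):
    """Apply a substitution (dict from variable to term) to a term."""
    if is_var(t):
        return substitute(subst[t], subst) if t in subst else t
    return (t[0],) + tuple(substitute(arg, subst) for arg in t[1:])

def occurs(var, t, subst):
    """Occurs check: does var appear in t under the current substitution?"""
    t = substitute(t, subst)
    if is_var(t):
        return t == var
    return any(occurs(var, arg, subst) for arg in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier of s and t, or None if none exists."""
    subst = dict(subst or {})
    s, t = substitute(s, subst), substitute(t, subst)
    if is_var(s):
        if s == t:
            return subst
        if occurs(s, t, subst):
            return None
        subst[s] = t
        return subst
    if is_var(t):
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):
        return None
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Example: unify f(x, g(y)) with f(h(z), g(z))  =>  {x: h(z), y: z}
print(unify(("f", "x", ("g", "y")), ("f", ("h", "z"), ("g", "z"))))
```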
## Reinforcement Learning
**Deep Reinforcement Learning:** With [Dr. Ron Zacharski](http://zacharski.org/) I focused on the instance of reinforcement learning in which deep neural networks are used. During this time, I built a reinforcement learning library in PyTorch. It serves as a test bed for trying out existing algorithms and experimenting with my own.
One problem I'm particularly fascinated by is how to make reinforcement learning algorithms more sample efficient: how can an agent learn more from every observation so that it reaches its goal sooner?
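As a rough illustration of one common route to sample efficiency, here is a bare-bones experience replay buffer. This is only a sketch, not the rltorch implementation: by storing past transitions, an agent can train on each observation many times instead of discarding it after a single update.

```python
import random
from collections import deque

# Bare-bones experience replay buffer (illustrative; not the rltorch API).
class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

# Usage sketch: push transitions as the agent acts, then repeatedly sample
# minibatches for training updates.
buffer = ReplayBuffer(capacity=1000)
for step in range(500):
    buffer.push(state=step, action=0, reward=1.0, next_state=step + 1, done=False)
if len(buffer) >= 32:
    states, actions, rewards, next_states, dones = buffer.sample(32)
```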
*Links:*
| | | |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| [RL Library on Github](https://github.com/brandon-rozek/rltorch) | [Interactive Demonstrations Library](https://github.com/brandon-rozek/gyminteract) | [Honors Thesis](/files/research/honorsthesis.pdf) ([Eagle Scholar Entry](https://scholar.umw.edu/student_research/305/)) |
| [Honors Defense](/files/research/ExpeditedLearningInteractiveDemo.pptx) | [QEP Algorithm Slides](/files/research/QEP.pptx) | [More...](deepreinforcementlearning) |
**Reinforcement Learning:** Studied the fundamentals of reinforcement learning with [Dr. Stephen Davies](http://stephendavies.org/). We covered value functions, policy functions, how to describe an environment as a Markov decision process, and so on.
[Notes and Other Goodies](reinforcementlearning)
[Github Code](https://github.com/brandon-rozek/ReinforcementLearning)
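To make the Markov decision process vocabulary concrete, here is a small hypothetical example (not taken from the notes above): value iteration on a toy two-state MDP, followed by reading off the greedy policy.

```python
# Value iteration on a toy two-state MDP (hypothetical example for illustration).
# transitions[state][action] is a list of (probability, next_state, reward) tuples.
transitions = {
    "A": {"stay": [(1.0, "A", 0.0)], "go": [(0.8, "B", 1.0), (0.2, "A", 0.0)]},
    "B": {"stay": [(1.0, "B", 2.0)], "go": [(1.0, "A", 0.0)]},
}
gamma = 0.9  # discount factor

# Start from a zero value function and sweep until the updates become tiny.
V = {s: 0.0 for s in transitions}
for _ in range(1000):
    delta = 0.0
    for s in transitions:
        q_values = [
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in transitions[s].values()
        ]
        best = max(q_values)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-8:
        break

# The greedy policy picks the action with the highest expected return.
policy = {
    s: max(transitions[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                             for p, s2, r in transitions[s][a]))
    for s in transitions
}
print(V, policy)
```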
## Other
[**Programming Languages:**](proglang) Back in the Fall of 2018, under the guidance of Ian Finlayson, I worked towards creating a programming language similar to SLOTH (Simple Language of Tiny Heft). [SLOTH Code](https://github.com/brandon-rozek/SLOTH)
Before this study, I worked through a great book called ["Build your own Lisp"](https://www.buildyourownlisp.com/).
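As a rough sketch of the interpreter pipeline such a project involves (tokenize, parse, evaluate), here is a minimal arithmetic-expression evaluator. It is illustrative only and is not how SLOTH itself is implemented.

```python
import re

# Minimal tokenizer + recursive-descent evaluator for arithmetic expressions.
# Illustrative only; SLOTH is a larger language with its own grammar.

def tokenize(src):
    """Split source text into number, operator, and parenthesis tokens."""
    return re.findall(r"\d+|[()+\-*/]", src)

def parse_expr(tokens, pos=0):
    """expr := term (('+'|'-') term)*"""
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in "+-":
        op = tokens[pos]
        rhs, pos = parse_term(tokens, pos + 1)
        value = value + rhs if op == "+" else value - rhs
    return value, pos

def parse_term(tokens, pos):
    """term := factor (('*'|'/') factor)*"""
    value, pos = parse_factor(tokens, pos)
    while pos < len(tokens) and tokens[pos] in "*/":
        op = tokens[pos]
        rhs, pos = parse_factor(tokens, pos + 1)
        value = value * rhs if op == "*" else value // rhs
    return value, pos

def parse_factor(tokens, pos):
    """factor := NUMBER | '(' expr ')'"""
    tok = tokens[pos]
    if tok.isdigit():
        return int(tok), pos + 1
    assert tok == "(", f"unexpected token {tok!r}"
    value, pos = parse_expr(tokens, pos + 1)
    assert tokens[pos] == ")", "missing closing parenthesis"
    return value, pos + 1

print(parse_expr(tokenize("2 * (3 + 4) - 5"))[0])  # prints 9
```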
[**Competitive Programming:**](progcomp) Studying the algorithms and data structures needed for competitive programming. Attended the ACM ICPC in November of 2018 and 2019 with a team of two other students.
**Cluster Analysis:** The study of grouping similar observations without any prior knowledge of the groups. I studied this topic through deep dives into Wikipedia articles under the guidance of Dr. Melody Denhere during Spring 2018. **[Extensive notes](clusteranalysis)**
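As a small illustration of the idea (not taken from the notes), here is a plain k-means sketch that groups a handful of 2-D points into clusters without any labels.

```python
import random

# Plain k-means sketch (illustrative only; not from the linked notes).
def kmeans(points, k, iterations=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random points
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            distances = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = []
        for cluster, old in zip(clusters, centroids):
            if cluster:
                new_centroids.append((sum(x for x, _ in cluster) / len(cluster),
                                      sum(y for _, y in cluster) / len(cluster)))
            else:
                new_centroids.append(old)  # keep empty clusters where they were
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, clusters = kmeans(points, k=2)
print(centroids)
```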
[**Excitation of Rb87**](rb87): Worked in a Quantum Research lab alongside fellow student Hannah Killian under the guidance of Dr. Hai Nguyen. I provided software tools and assisted in understanding the mathematics behind the phenomena.
[Modeling Population Dynamics of Incoherent and Coherent Excitation](/files/research/modellingpopulationdynamics.pdf)
[Coherent Control of Atomic Population Using the Genetic Algorithm](/files/research/coherentcontrolofatomicpopulation.pdf)
[**Beowulf Cluster:**](lunac) Frustrated by how long my simulation code took to run, I applied for and received funding to build a Beowulf cluster for the Physics department. Dr. Maia Magrakvilidze was the advisor for this project. [LUNA-C Poster](/files/research/LUNACposter.pdf)