Repository: https://github.com/Brandon-Rozek/website.git
Commit 2489aa163b (parent 91ecc135fa): Updated research section on website
7 changed files with 125 additions and 8 deletions
Description: A list of my research Projects
**[Quick List of Publications](publications)**
**Research Interests:** Automated Reasoning, Artificial Intelligence, Formal Methods
## Symbolic Methods for Cryptography
I worked with Dr. Andrew Marshall under an ONR grant, in collaboration with the University at Albany, Clarkson University, the University of Texas at Dallas, and the Naval Research Laboratory, to automatically generate and verify cryptographic algorithms using symbolic (as opposed to computational) methods.
During that time, I built a free algebra library, a rewrite library, and parts of the crypto tool, and dabbled in unification algorithms. You can check them out on [GitHub](https://github.com/symcollab/CryptoSolve).
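As a flavor of what the unification side involves, here is a minimal sketch of syntactic (Robinson-style) unification. The term encoding and function names are illustrative only, not the CryptoSolve API.

```python
# Minimal syntactic (Robinson) unification sketch -- illustrative only,
# not the CryptoSolve API. Variables are strings; compound terms are
# (function_name, arg, ...) tuples.

def is_var(t):
    return isinstance(t, str)

def substitute(t, subst):
    """Apply a substitution (dict var -> term) to a term."""
    if is_var(t):
        return substitute(subst[t], subst) if t in subst else t
    return (t[0],) + tuple(substitute(a, subst) for a in t[1:])

def occurs(v, t, subst):
    t = substitute(t, subst)
    if is_var(t):
        return v == t
    return any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier of s and t, or None if none exists."""
    subst = {} if subst is None else subst
    s, t = substitute(s, subst), substitute(t, subst)
    if s == t:
        return subst
    if is_var(s):
        if occurs(s, t, subst):
            return None          # occurs check: x vs f(x) fails
        return {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):
        return None              # clashing function symbols or arities
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# f(x, g(y)) unified with f(g(z), g(g(z)))  =>  {x: g(z), y: g(z)}
mgu = unify(("f", "x", ("g", "y")), ("f", ("g", "z"), ("g", ("g", "z"))))
```

Equational unification (unification modulo an algebraic theory such as exclusive-or), which the project actually needs, is considerably harder than this syntactic case.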
Currently, I am an external collaborator who mainly helps maintain the codebase I started and contributes to our ongoing work on garbled circuits. We presented our work at [UNIF 2020](https://www3.risc.jku.at/publications/download/risc_6129/proceedings-UNIF2020.pdf#page=58) ([slides](/files/research/UNIF2020-Slides.pdf)), had a paper accepted at FROCOS 2021, and have a couple of other papers in the works.
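For readers unfamiliar with garbled circuits, the core trick can be sketched for a single AND gate. This is a textbook-style construction with an illustrative tag-based decryption check, not the scheme from the papers above.

```python
# Minimal garbled AND gate -- a textbook-style sketch, not the construction
# from the papers above. Each wire gets two random labels (one per bit);
# the garbler encrypts each output label under the matching pair of input
# labels, and the evaluator can decrypt exactly one shuffled row.
import hashlib, os, secrets

LABEL_LEN = 16
TAG_LEN = 4

def H(a, b):
    # hash the two input labels into a one-time pad for one table row
    return hashlib.sha256(a + b).digest()[:LABEL_LEN + TAG_LEN]

def garble_and():
    # two random labels per wire: w[bit] encodes that bit on the wire
    wa = [os.urandom(LABEL_LEN) for _ in range(2)]
    wb = [os.urandom(LABEL_LEN) for _ in range(2)]
    wc = [os.urandom(LABEL_LEN) for _ in range(2)]
    table = []
    for x in (0, 1):
        for y in (0, 1):
            pad = H(wa[x], wb[y])
            # encrypt the output label (plus an all-zero tag) for this row
            row = bytes(p ^ m for p, m in zip(pad, wc[x & y] + b"\x00" * TAG_LEN))
            table.append(row)
    secrets.SystemRandom().shuffle(table)   # hide which row is which
    return wa, wb, wc, table

def evaluate(label_a, label_b, table):
    """Given one label per input wire, recover the output label."""
    pad = H(label_a, label_b)
    for row in table:
        out = bytes(p ^ m for p, m in zip(pad, row))
        if out[-TAG_LEN:] == b"\x00" * TAG_LEN:   # this row decrypted cleanly
            return out[:-TAG_LEN]
    raise ValueError("no row decrypted")

wa, wb, wc, table = garble_and()
assert evaluate(wa[1], wb[1], table) == wc[1]   # 1 AND 1 = 1
assert evaluate(wa[0], wb[1], table) == wc[0]   # 0 AND 1 = 0
```

The evaluator holds exactly one label per input wire, so it learns the gate's output label without learning the input bits. Real schemes use point-and-permute and other optimizations instead of this try-every-row tag check.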
Through my collaborators, I've learned about term reasoning and algebras. [[Notes]](termreasoning)
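Term rewriting, one of the topics those notes touch on, can be illustrated with a toy system. The rule set and term encoding below are invented for illustration and are not the rewrite library's API.

```python
# Tiny term-rewriting sketch -- illustrative only, not the CryptoSolve
# rewrite library. Terms are nested tuples; subterms are normalized before
# the root, until a normal form is reached. Rules: Peano addition,
#   add(0, y) -> y        add(s(x), y) -> s(add(x, y))

def rewrite_step(t):
    """Apply one rule at the root if possible; return (new_term, changed)."""
    if isinstance(t, tuple) and t[0] == "add":
        x, y = t[1], t[2]
        if x == "0":
            return y, True
        if isinstance(x, tuple) and x[0] == "s":
            return ("s", ("add", x[1], y)), True
    return t, False

def normalize(t):
    """Rewrite subterms first, then the root, down to a normal form."""
    while True:
        if isinstance(t, tuple):
            t = (t[0],) + tuple(normalize(a) for a in t[1:])
        t, changed = rewrite_step(t)
        if not changed:
            return t

two = ("s", ("s", "0"))
one = ("s", "0")
# add(s(s(0)), s(0)) normalizes to s(s(s(0))), i.e. 2 + 1 = 3
assert normalize(("add", two, one)) == ("s", ("s", ("s", "0")))
```

Questions the research actually cares about — whether such a rule system is terminating and confluent, so every term has a unique normal form — are exactly what makes rewriting useful for reasoning about cryptographic constructions.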
## Reinforcement Learning
**Deep Reinforcement Learning:** With Dr. Ron Zacharski, I focused on a particular instance of reinforcement learning in which deep neural networks are used. During this time, I built a reinforcement learning library in PyTorch. It gives me a test bed for trying out different algorithms and attempting to create my own.
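The library itself is PyTorch-based, but the update rule underneath deep Q-learning can be sketched dependency-free in tabular form. The toy corridor environment below is invented purely for illustration.

```python
# Tabular Q-learning on a tiny corridor environment -- a dependency-free
# sketch of the update rule deep Q-learning builds on, not the library's API.
import random

N = 5                      # states 0..4; reaching state 4 gives reward 1
ACTIONS = (-1, +1)         # move left or right

def step(state, action):
    nxt = max(0, min(N - 1, state + action))
    return nxt, (1.0 if nxt == N - 1 else 0.0), nxt == N - 1

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N)]   # q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            s2, r, done = step(s, ACTIONS[a])
            # Q-learning update: bootstrap off the best next-state value
            q[s][a] += alpha * (r + gamma * max(q[s2]) * (not done) - q[s][a])
            s = s2
    return q

q = train()
# the learned greedy policy moves right in every non-terminal state
assert all(q[s][1] > q[s][0] for s in range(N - 1))
```

A deep variant replaces the table `q` with a neural network and the in-place update with a gradient step on the same bootstrapped target, which is where PyTorch comes in.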
[Github Code](https://github.com/brandon-rozek/ReinforcementLearning)
## Other
**Programming Languages:** Studying the design of programming languages. So far, I have written an implementation of the SLOTH programming language, experimenting with what I want my own programming language to be syntactically and paradigm-wise. [SLOTH Code](https://github.com/brandon-rozek/SLOTH)
Before this study, I worked through the book ["Build Your Own Lisp"](https://www.buildyourownlisp.com/) and wrote my own implementation of a Lisp-like language: [Lispy Code](https://github.com/brandon-rozek/lispy)
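The book itself works in C, but the read-eval core it builds toward can be sketched in a few lines of Python. The grammar and operator set here are a deliberately tiny subset, not the Lispy implementation.

```python
# A minimal Lisp-style reader and evaluator -- a Python sketch of the core
# idea ("Build Your Own Lisp" does this in C). Supports integers,
# arithmetic, and prefix application: (+ 1 (* 2 3)).
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one expression from the front of the token list."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)            # drop the closing ")"
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok               # a symbol, e.g. "+"

def evaluate(expr):
    if isinstance(expr, int):
        return expr
    op, *args = expr
    return OPS[op](*[evaluate(a) for a in args])

assert evaluate(parse(tokenize("(+ 1 (* 2 3))"))) == 7
```

A real implementation adds environments, user-defined functions, and error handling; the appeal of Lisp for this exercise is that the parse step is almost free.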
**Competitive Programming:** Studying the algorithms and data structures needed for competitive programming. I attended the ACM ICPC in November 2018 and 2019 with a team of two other students.
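A representative example of the patterns such study drills is breadth-first search for shortest paths on an unweighted grid. The maze below is a made-up instance, not a specific contest problem.

```python
# A staple competitive-programming pattern: BFS shortest path on an
# unweighted grid (illustrative example, not from a particular contest).
from collections import deque

def shortest_path(grid, start, goal):
    """Fewest steps from start to goal; '#' cells are walls. -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return -1

maze = ["..#",
        ".#.",
        "..."]
assert shortest_path(maze, (0, 0), (2, 2)) == 4
```

Because every edge has weight 1, plain BFS suffices; a weighted version of the same problem is where Dijkstra's algorithm takes over.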