Website snapshot

This commit is contained in:
Brandon Rozek 2020-01-15 21:51:49 -05:00
parent ee0ab66d73
commit 50ec3688a5
281 changed files with 21066 additions and 0 deletions

# Backtracking
This algorithm constructs a solution to a problem one piece at a time. Whenever the algorithm must decide between multiple alternatives for the next piece of the solution, it *recursively* evaluates every option and chooses the best one.
## How to Win
To beat any *non-random perfect-information* game, you can define a backtracking algorithm that only needs to know the following:
- A game state is good if either the current player has already won or if the current player can move to a bad state for the opposing player.
- A game state is bad if either the current player has already lost or if every available move leads to a good state for the opposing player.
```
PlayAnyGame(X, player):
    if player has already won in state X
        return GOOD
    if player has already lost in state X
        return BAD
    for all legal moves X -> Y
        if PlayAnyGame(Y, other player) = BAD
            return GOOD
    return BAD
```
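The rules above can be sketched in Python using a simple subtraction game as the example (a hypothetical choice, not from the text): players alternately remove 1 to 3 stones, and whoever cannot move loses.

```python
# A minimal PlayAnyGame sketch for a 1-2-3 subtraction game.
from functools import lru_cache

@lru_cache(maxsize=None)
def play_any_game(stones):
    """Return "GOOD" if the player to move can force a win."""
    if stones == 0:
        return "BAD"  # current player has no legal move, so they lose
    for take in (1, 2, 3):
        # GOOD if we can hand the opponent a BAD state
        if take <= stones and play_any_game(stones - take) == "BAD":
            return "GOOD"
    return "BAD"  # every move leads to a GOOD state for the opponent

print(play_any_game(4))  # "BAD": every move leaves the opponent 1-3 stones
print(play_any_game(5))  # "GOOD": take 1, leaving the opponent 4 stones
```

The memoization (`lru_cache`) is optional for correctness; it just keeps the state count manageable, which foreshadows the point below about enormous game trees.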
In practice, most games have an enormous number of states, making it impossible to traverse the entire game tree.
## Subset Sum
For a given set, can you find a subset that sums to a certain value?
```
SubsetSum(X, T):
    if T = 0
        return True
    else if T < 0 or X is empty
        return False
    else
        x = any element of X
        with = SubsetSum(X \ {x}, T - x)
        without = SubsetSum(X \ {x}, T)
        return (with or without)
```
X \ {x} denotes set difference: the set X with the element x removed.
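SubsetSum transcribes directly into Python (a sketch; the example set is made up). Using a tuple for X makes the X \ {x} step a simple slice:

```python
# Decision version: is there a subset of X summing to T?
def subset_sum(X, T):
    if T == 0:
        return True          # the empty subset works
    if T < 0 or not X:
        return False         # overshot, or nothing left to choose
    x, rest = X[0], X[1:]    # pick any element; here, the first
    with_x = subset_sum(rest, T - x)   # include x in the subset
    without_x = subset_sum(rest, T)    # leave x out
    return with_x or without_x

print(subset_sum((8, 6, 7, 5, 3, 10, 9), 15))  # True, e.g. 8 + 7
```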
```
ConstructSubset(X, i, T):
    if T = 0
        return empty set
    if T < 0 or i = 0
        return None
    Y = ConstructSubset(X, i - 1, T)
    if Y does not equal None
        return Y
    Y = ConstructSubset(X, i - 1, T - X[i])
    if Y does not equal None
        return Y with X[i]
    return None
```
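ConstructSubset can be sketched the same way; since Python lists are 0-indexed, `X[i]` in the pseudocode becomes `X[i - 1]` here, and `i` counts how many elements are still in play:

```python
# Construction version: return an actual subset summing to T, or None.
def construct_subset(X, i, T):
    if T == 0:
        return set()                 # empty subset achieves T = 0
    if T < 0 or i == 0:
        return None                  # no subset possible here
    Y = construct_subset(X, i - 1, T)            # try without X[i-1]
    if Y is not None:
        return Y
    Y = construct_subset(X, i - 1, T - X[i - 1]) # try with X[i-1]
    if Y is not None:
        return Y | {X[i - 1]}
    return None

items = [8, 6, 7, 5, 3]
print(construct_subset(items, len(items), 15))  # a subset summing to 15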
## Big Idea
Backtracking algorithms are used to make a *sequence of decisions*.
When we design a new recursive backtracking algorithm, we must figure out in advance what information we will need about past decisions in the middle of the algorithm.

# Dynamic Programming
The book first discusses the complexity of the naive recursive Fibonacci algorithm.
```
RecFibo(n):
    if n = 0
        return 0
    else if n = 1
        return 1
    else
        return RecFibo(n - 1) + RecFibo(n - 2)
```
It shows that the complexity of this is exponential.
"A single call to `RecFibo(n)` results in one recursive call to `RecFibo(n - 1)`, two recursive calls to `RecFibo(n - 2)`, three recursive calls to `RecFibo(n - 3)`, five recursive calls to `RecFibo(n - 4)`"
Now consider the memoized version of this algorithm...
```
MemFibo(n):
    if n = 0
        return 0
    else if n = 1
        return 1
    else
        if F[n] is undefined
            F[n] <- MemFibo(n - 1) + MemFibo(n - 2)
        return F[n]
```
This actually makes the algorithm run in linear time!
Dynamic programming makes use of this fact and just intentionally fills up an array with the values of $F$.
```
IterFibo(n):
    F[0] <- 0
    F[1] <- 1
    for i <- 2 to n
        F[i] <- F[i - 1] + F[i - 2]
    return F[n]
```
Here the linear complexity becomes super apparent!
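The bottom-up idea translates directly to Python. As a small refinement (a sketch, not from the text), for Fibonacci the full table is unnecessary, since only the last two entries are ever read:

```python
# IterFibo with the table collapsed to two variables: same linear time,
# constant space.
def iter_fibo(n):
    if n == 0:
        return 0
    prev, curr = 0, 1            # F[i-1], F[i], starting at i = 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr   # slide the window forward
    return curr

print(iter_fibo(10))  # 55
```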
<u>Interesting snippet</u>
"We had a very interesting gentleman in Washington named Wilson. He was secretary of Defense, and he actually had a pathological fear and hatred of the word *research*. I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term *research* in his presence. You can imagine how he felt, then, about the term *mathematical*.... I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose?"
Dynamic programming is essentially smarter recursion. It's about not repeating the same work.
These algorithms are best developed in two distinct stages.
(1) Formulate the problem recursively.
(a) Specification: What problem are you trying to solve?
(b) Solution: How can you solve the whole problem in terms of the answers to smaller instances of exactly the same problem?
(2) Find solutions to your recurrence from the bottom up
(a) Identify the subproblems
(b) Choose a memoization data structure
(c) Identify dependencies
(d) Find a good evaluation order
(e) Analyze space and running time
(f) Write down the algorithm
## Greedy Algorithms
If we're lucky, we can just make decisions directly instead of solving any recursive subproblems. The problem is that greedy algorithms almost never work.

# Greedy Algorithms
Greedy Algorithms are about making the best local choice and then blindly plowing ahead.
An example of this is documents on a tape. Let's say to read any given document on a tape, you first need to read all the documents that come before it.
What is the best way to have the documents laid out in order to decrease the expected read time?
Answer: Putting the smaller documents first.
The book then generalizes further by adding access frequencies. The answer there is to sort the documents in decreasing order of access frequency divided by length.
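The length-ordering claim can be checked by brute force. This sketch (with made-up lengths and uniform access) computes the expected read cost of every ordering, i.e. the average of the prefix sums of the lengths:

```python
# Compare every ordering's expected read time; shortest-first should win.
from itertools import permutations

def expected_cost(lengths):
    total, prefix = 0, 0
    for length in lengths:
        prefix += length     # must read everything up to this document
        total += prefix
    return total / len(lengths)

lengths = [7, 2, 5]
best = min(permutations(lengths), key=expected_cost)
print(best)  # (2, 5, 7): sorted by increasing length
```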
## Scheduling Classes
To maximize the number of classes one can take, one greedy algorithm is to always select the class that ends first.
This produces a maximal conflict-free schedule.
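The earliest-ending-class rule can be sketched in Python (the class times below are hypothetical):

```python
# Greedy interval scheduling: classes are (start, end) pairs.
def schedule(classes):
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(classes, key=lambda c: c[1]):  # by end time
        if start >= last_end:            # no conflict with previous pick
            chosen.append((start, end))
            last_end = end
    return chosen

classes = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]
print(schedule(classes))  # [(1, 4), (5, 7), (8, 11)]
```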
## Structure of Correctness Proofs for Greedy Algorithms
- Assume that there is an optimal solution that is different from the greedy solution.
- Find the first "difference" between the two solutions.
- Argue that we can exchange the optimal choice for the greedy choice without making the solution worse.
## Hospital-Doctor Matching
Let's say you have a list of doctors that need internships and hospitals accepting interns. We need to make it so that every doctor has a job and every hospital has a doctor.
Assuming we have the same number of doctors and hospitals and that each hospital offers only one internship, how do we make an algorithm to ensure that no unstable matchings exist?
An unstable match is when
- doctor a is matched with some hospital A even though she prefers hospital B, and
- hospital B is matched with some doctor b even though it prefers a.
The Gale-Shapley algorithm is a great greedy fit. It goes like this
1. An arbitrary unmatched hospital A offers its position to the best doctor a who has not already rejected it.
2. If a is unmatched, she tentatively accepts A's offer. If a already had a match but prefers A, she rejects her current match and tentatively accepts the new offer from A. Otherwise a rejects the new offer.
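These two steps can be sketched in Python; the preference lists below are hypothetical, just to exercise the algorithm:

```python
# Gale-Shapley: hospitals propose in preference order; doctors
# tentatively hold the best offer seen so far.
def gale_shapley(hospital_prefs, doctor_prefs):
    # rank[d][h] = how much doctor d likes hospital h (lower is better)
    rank = {d: {h: i for i, h in enumerate(prefs)}
            for d, prefs in doctor_prefs.items()}
    next_offer = {h: 0 for h in hospital_prefs}  # next doctor to try
    match = {}                                   # doctor -> hospital
    free = list(hospital_prefs)                  # unmatched hospitals
    while free:
        h = free.pop()
        d = hospital_prefs[h][next_offer[h]]     # best not-yet-rejected
        next_offer[h] += 1
        if d not in match:
            match[d] = h                         # tentatively accept
        elif rank[d][h] < rank[d][match[d]]:
            free.append(match[d])                # old hospital rejected
            match[d] = h
        else:
            free.append(h)                       # new offer rejected
    return match

hospital_prefs = {"A": ["x", "y"], "B": ["x", "y"]}
doctor_prefs = {"x": ["B", "A"], "y": ["A", "B"]}
print(gale_shapley(hospital_prefs, doctor_prefs))  # {'x': 'B', 'y': 'A'}
```

With equal numbers of doctors and hospitals, every hospital eventually gets a tentative acceptance, so the loop terminates.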

# Recursion
## Reductions
Quote: *Reduction* is the single most common technique used in designing algorithms. Reduce one problem $X$ to another problem $Y$.
The running time of the algorithm depends on the running time of the algorithm for $Y$, but $Y$'s inner workings do not affect the correctness of the reduction. So it is often useful to not be concerned with how $Y$ works internally.
## Simplify and Delegate
Quote: *Recursion* is a particularly powerful kind of reduction, which can be described loosely as follows:
- If the given instance of the problem can be solved directly, then do so.
- Otherwise, reduce it to one or more **simpler instances of the same problem.**
The book likes to call the delegation of simpler tasks "giving it to the recursion fairy."
Your own task as an algorithm writer is to simplify the original problem and solve base cases. The recursion fairy will handle the rest.
Tying this to mathematics, this is known as the **Induction Hypothesis**.
The only caveat is that simplifying the task must eventually lead to a **base case**; otherwise the algorithm might run forever!
#### Example: Tower of Hanoi
Assuming you know how the game works, we will describe how to solve it.
Quote: We can't move the largest disk at the beginning, because all the other disks are in the way. So first we have to move those $n - 1$ smaller disks to the spare peg. Once that's done, we can move the largest disk directly to its destination. Finally, to finish the puzzle, we have to move the $n - 1$ disks from the spare peg to the destination.
**That's it.**
Since the problem was reduced to a base case and a $(n - 1)$ problem, we're good. The book has a funny quote "Our job is finished. If we didn't trust the junior monks, we wouldn't have hired them; let them do their job in peace."
```
Hanoi(n, src, dst, tmp):
    if n > 0
        Hanoi(n - 1, src, tmp, dst)
        move disk n from src to dst
        Hanoi(n - 1, tmp, dst, src)
```
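The same recursion in Python, as a sketch that records each move; the move count works out to $2^n - 1$:

```python
# Tower of Hanoi, collecting (disk, from, to) moves.
def hanoi(n, src, dst, tmp, moves):
    if n > 0:
        hanoi(n - 1, src, tmp, dst, moves)  # clear the n-1 smaller disks
        moves.append((n, src, dst))         # move disk n directly
        hanoi(n - 1, tmp, dst, src, moves)  # stack them back on disk n
    return moves

moves = hanoi(3, "src", "dst", "tmp", [])
print(len(moves))  # 7, which is 2**3 - 1
```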
## Sorting Algorithms
### Merge Sort
```
MergeSort(A[1..n]):
    if n > 1
        m = floor(n / 2)
        MergeSort(A[1..m])
        MergeSort(A[m + 1..n])
        Merge(A[1..n], m)
```
```
Merge(A[1..n], m):
    i = 1
    j = m + 1
    for k = 1 to n
        if j > n
            B[k] = A[i]
            i++
        else if i > m
            B[k] = A[j]
            j++
        else if A[i] < A[j]
            B[k] = A[i]
            i++
        else
            B[k] = A[j]
            j++
    Copy B to A
```
I think an important part to recall here is that the recursion breaks the array all the way down to the lowest level, single-element arrays, and then slowly works its way back up.
That means we can always assume that each subarray we're merging is already sorted, which is why the merge algorithm is written the way it is.
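The same divide-and-conquer structure in Python, as a sketch mirroring the pseudocode's merge loop:

```python
# Merge sort: split, recurse on both halves, then merge the two
# already-sorted halves, one output element per loop iteration.
def merge_sort(A):
    if len(A) <= 1:
        return A
    m = len(A) // 2
    left, right = merge_sort(A[:m]), merge_sort(A[m:])
    merged, i, j = [], 0, 0
    for _ in range(len(A)):
        if j >= len(right):               # right half exhausted
            merged.append(left[i]); i += 1
        elif i >= len(left):              # left half exhausted
            merged.append(right[j]); j += 1
        elif left[i] < right[j]:          # take the smaller front element
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```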
## The Pattern
This section is quoted verbatim.
1. **Divide** the given instance of the problem into several *independent smaller* instances of *exactly* the same problem.
2. **Delegate** each smaller instance to the Recursion Fairy.
3. **Combine** the solutions for the smaller instances into the final solution for the given instance.