Compare commits

...

4 commits

SHA1 Message Date
4a0c18652a New Post 2023-11-09 22:37:45 -05:00
0f2550fbbb Updating tags 2023-11-09 22:37:31 -05:00
12d80667e8 New Post 2023-11-09 22:14:16 -05:00
7ea6aac5c0 New post 2023-11-09 21:37:39 -05:00
4 changed files with 318 additions and 1 deletion


@@ -0,0 +1,151 @@
---
title: "Figuring out which git repositories are ahead or behind"
date: 2023-11-09T21:05:34-05:00
draft: false
tags: ["Git"]
math: false
medium_enabled: false
---
More often than I care to admit, I pick up work on a device only to realize that I'm working with an older version of the codebase. I could use the `git status` command, but its output is verbose, and it's stale if you haven't run `git fetch` or `git pull` recently.
I keep the majority of my git repositories in the folder `~/repo/` on all my devices. Inspired by a recent [blog post by Clayton Errington](https://claytonerrington.com/blog/git-status/), I wanted a way to quickly check within a folder which repositories need updating. Their blog post has a script written in PowerShell. I decided to write my own bash implementation, and also ignore the bit about modified files since I mostly care about the state of my commits with respect to the `origin` remote.
Before writing a recursive implementation, let's first discuss how to check the ahead/behind status for a single repository.
First things first, we need to make sure we have all the references from the remote:
```bash
git remote update
```
To print how many commits the local `main` branch is ahead of the one on the `origin` remote, we can use:
```bash
git rev-list --count origin/main..main
```
Similarly, to check how many commits the local `main` branch is behind, we can use:
```bash
git rev-list --count main..origin/main
```
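As an aside, the two counts can come from a single `rev-list` call using the three-dot symmetric-difference syntax; `--left-right --count` prints two tab-separated numbers, commits only on the left ref (behind) and commits only on the right ref (ahead). A small sketch, wrapped as a function:

```bash
# Print "<behind>	<ahead>" for the current branch versus its
# counterpart on the origin remote, in one rev-list call
ahead_behind() {
    branch=$(git rev-parse --abbrev-ref HEAD)
    git rev-list --left-right --count "origin/$branch...$branch"
}
```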
Instead of looking at the `main` branch, maybe we want to check whichever branch we're currently on. We can grab its name with:
```bash
branch=$(git rev-parse --abbrev-ref HEAD)
```
We can wrap all of this into a nice bash function. We'll additionally check that a `.git` directory exists in the current folder, since none of the git commands will work without it.
```bash
check_remote() {
    if [ -d .git ]; then
        # Refresh remote refs quietly so the counts aren't stale
        git remote update > /dev/null 2>&1
        branch=$(git rev-parse --abbrev-ref HEAD)
        ahead=$(git rev-list --count "origin/$branch..$branch")
        behind=$(git rev-list --count "$branch..origin/$branch")
        echo "$ahead commits ahead, $behind commits behind"
    fi
}
```
I currently have 15 repositories in my `~/repo` folder. Now I can `cd` into each of them and run this bash function. Or, I can have `bash` do it for me :)
Let's write a function called `process` that does just that. We'll pass a folder in as an argument, available inside the function as `$1`:
```bash
process() {
    if [ -d "$1/.git" ]; then
        pushd "$PWD" > /dev/null
        cd "$1"
        echo -n "$1 "
        check_remote
        popd > /dev/null
    fi
}
```
The `pushd` command keeps track of the folder that we're currently in. Then we `cd` into the directory containing the `.git` folder, print its name so we can associate it with the ahead/behind counts, and run the `check_remote` function. Lastly, we `popd` back to the folder we started from.
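An alternative sketch: running the `cd` inside a subshell confines the directory change to the parentheses, so no `pushd`/`popd` bookkeeping is needed (same behavior, assuming the same `check_remote` as above):

```bash
process() {
    if [ -d "$1/.git" ]; then
        echo -n "$1 "
        # The cd happens inside the subshell, so the parent
        # shell's working directory is untouched
        ( cd "$1" && check_remote )
    fi
}
```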
All that's left is to get the list of folders to process:
```bash
find . -type d -print0
```
Feed it into a `while read` loop, passing each folder to the `process` function.
```bash
find . -type d -print0 | while read -r -d '' folder
do
    process "$folder"
done
```
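One caveat: `find . -type d` also descends into every `.git` directory, which can contain thousands of subdirectories we never want to process. A hedged refinement that prunes them (the loop stays the same):

```bash
# .git directories match the first branch and get pruned (and not
# printed); every other directory falls through to -print0
find . -type d -name .git -prune -o -type d -print0
```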
All together, the script looks like:
```bash
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

show_usage() {
    echo "Usage: git-remote-status [-R]"
    exit 1
}

check_remote() {
    if [ -d .git ]; then
        # Refresh remote refs quietly so the counts aren't stale
        git remote update > /dev/null 2>&1
        branch=$(git rev-parse --abbrev-ref HEAD)
        ahead=$(git rev-list --count "origin/$branch..$branch")
        behind=$(git rev-list --count "$branch..origin/$branch")
        echo "$ahead commits ahead, $behind commits behind"
    fi
}

if [ "$#" -eq 0 ]; then
    check_remote
    exit 0
fi

if [ "$1" != "-R" ]; then
    show_usage
fi

process() {
    if [ -d "$1/.git" ]; then
        pushd "$PWD" > /dev/null
        cd "$1"
        echo -n "$1 "
        check_remote
        popd > /dev/null
    fi
}
export -f process

find . -type d -print0 | while read -r -d '' folder
do
    process "$folder"
done
```
This gives us two options. If we pass in no flags, then it'll print out the ahead/behind status for the current folder. If we pass in `-R`, then we recursively check all the subfolders as well.
Example Output of `git-remote-status -R`:
```
./project1 0 commits ahead, 3 commits behind
./project2 1 commits ahead, 0 commits behind
./project3 1 commits ahead, 2 commits behind
./project4 0 commits ahead, 0 commits behind
./project5 0 commits ahead, 0 commits behind
```


@@ -2,7 +2,7 @@
title: "Obtaining a IPv6 Address via Hurricane Electric's Tunnel Broker Service"
date: 2023-10-09T22:31:10-04:00
draft: false
tags: []
tags: ["Networking"]
math: false
medium_enabled: false
---


@@ -0,0 +1,40 @@
---
title: "Transcoding Plex Video in RAM"
date: 2023-11-09T21:58:55-05:00
draft: false
tags: []
math: false
medium_enabled: false
---
One not-so-secret trick I recently learned is that you can have Plex transcode video in RAM to speed it up. This is especially useful if, like me, your server runs on spinning-rust hard drives and you have many GBs of RAM to spare.
**Step 1:** Set a transcode location in Plex.
Log into your admin account. Then go to Settings -> Transcoder.
From there, select the "Show Advanced" button and scroll down to "Set Transcoder temporary directory".
I've set it to `/transcode` and then hit the Save button.
**Step 2:** Set up `tmpfs`
We can use `tmpfs` to set up a filesystem, backed by RAM, at the transcode directory. If Plex isn't installed in a container, you can follow the [Arch Wiki instructions](https://wiki.archlinux.org/title/Tmpfs) for setting it up.
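For a non-container install, the mount can also be declared in `/etc/fstab` so it survives reboots. A sketch (the `size=4G` cap is my assumption; tune it to your spare RAM):

```
tmpfs  /transcode  tmpfs  defaults,noatime,size=4G,mode=1777  0  0
```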
I use `docker-compose` for my setup, so it involved adding some lines to my YAML.
```yaml
plex:
  image: lscr.io/linuxserver/plex:1.32.5
  # ...
  tmpfs:
    - /transcode
  # ...
```
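A `tmpfs` mount will happily grow until RAM runs out, so it's worth capping it. The compose long-form mount syntax accepts a size in bytes; a sketch (the 4 GiB cap is my assumption):

```yaml
plex:
  image: lscr.io/linuxserver/plex:1.32.5
  # ...
  volumes:
    - type: tmpfs
      target: /transcode
      tmpfs:
        size: 4294967296  # bytes, i.e. 4 GiB
```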
I don't promise a massive performance increase from setting up Plex this way, but I do think it makes sense:
- Improved write speeds for the transcoded files
- The transcoded files aren't persistent
If you're of a different opinion, let me know.


@@ -0,0 +1,126 @@
---
title: "Simple Key-Value Store using Sqlite3"
date: 2023-11-09T22:15:23-05:00
draft: false
tags: ["DB"]
math: false
medium_enabled: false
---
A lot of software nowadays is built for scale. You have to set up a Kubernetes cluster and deploy Redis for replication just to have a key-value store. For many small projects, I feel that's overkill.
In this post, I'll show that we can have a nice, simple[^1] key-value store using `sqlite3`. This gives us the benefit that we don't need to spend system resources on an always-running daemon; we only spin up a process when we need it.
For our key-value store, we're going to use a table with two columns:
- A key, which we'll call `name`. This will be a `TEXT` column that must be unique and non-null.
- The value, which we'll call `value`. (Creative, I know.) For our purposes, this will also be a `TEXT` type.
The SQL to create this table is:
```sql
CREATE TABLE config(
    name TEXT NOT NULL UNIQUE,
    value TEXT
);
```
Let's say we want to get the value of the key `author`. This is a `SELECT` statement away:
```sql
SELECT value FROM config WHERE name='author';
```
Now let's say that we want to insert a new key into the table.
```sql
INSERT INTO config(name, value) VALUES ('a', '1');
```
What about updating?
```sql
UPDATE config SET value='2' WHERE name='a';
```
The tricky part is when we want to insert if the key does not exist, and update if it does. To handle this, we'll need to resolve the [conflict](https://www.sqlite.org/lang_conflict.html).
```sql
INSERT INTO config(name, value) VALUES ('a', '3') ON CONFLICT(name) DO UPDATE SET value=excluded.value;
```
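A quick way to convince yourself the upsert behaves as described, using a throwaway database created with `mktemp` (a sketch):

```bash
db=$(mktemp)  # throwaway database file for the demo
sqlite3 "$db" <<'SQL'
CREATE TABLE config(name TEXT NOT NULL UNIQUE, value TEXT);
INSERT INTO config(name, value) VALUES ('a', '1');
INSERT INTO config(name, value) VALUES ('a', '3')
    ON CONFLICT(name) DO UPDATE SET value=excluded.value;
SELECT value FROM config WHERE name='a';
SQL
```

The final `SELECT` prints `3`: the second insert hit the conflict and updated the existing row instead of failing.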
Lastly, if you want to export the entire key-value store as a CSV:
```bash
sqlite3 -header -csv data.db "SELECT * FROM config;"
```
This is nice and all, but it's inconvenient to type out all these SQL commands. Therefore, I wrote two little bash scripts.
**`sqlite3_getkv`**
```bash
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

show_usage() {
    echo "Usage: sqlite3_getkv [db_file] [key]"
    exit 1
}

# Check argument count
if [ "$#" -ne 2 ]; then
    show_usage
fi

# Initialize the database file if it isn't already
sqlite3 "$1" "CREATE TABLE IF NOT EXISTS config(name TEXT NOT NULL UNIQUE, value TEXT);"

# Get value from key
sqlite3 "$1" "SELECT value FROM config WHERE name='$2';"
```
**`sqlite3_setkv`**
```bash
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

show_usage() {
    echo "Usage: sqlite3_setkv [db_file] [key] [value]"
    exit 1
}

# Check argument count
if [ "$#" -ne 3 ]; then
    show_usage
fi

# Initialize the database file if it isn't already
sqlite3 "$1" "CREATE TABLE IF NOT EXISTS config(name TEXT NOT NULL UNIQUE, value TEXT);"

# Set key-value pair, updating on conflict
sqlite3 "$1" "INSERT INTO config(name, value) VALUES ('$2', '$3') ON CONFLICT(name) DO UPDATE SET value=excluded.value;"
```
**Example Usage:**
```
$ ./sqlite3_setkv.sh test.db a 4
$ ./sqlite3_setkv.sh test.db c 5
$ ./sqlite3_getkv.sh test.db a
4
$ ./sqlite3_setkv.sh test.db a 5
$ ./sqlite3_getkv.sh test.db a
5
```
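For completeness, deleting a key follows the same pattern. A hypothetical companion helper, sketched here as a function rather than a full script:

```bash
sqlite3_delkv() {
    if [ "$#" -ne 2 ]; then
        echo "Usage: sqlite3_delkv [db_file] [key]"
        return 1
    fi
    # Remove the key-value pair; deleting a missing key is a no-op
    sqlite3 "$1" "DELETE FROM config WHERE name='$2';"
}
```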
[^1]: Somehow my idea of easier, simpler, and more maintainable is writing bash scripts.