--- title: "Accessing Homelab Services Remotely with mTLS" date: 2024-07-21T07:29:20-07:00 draft: false tags: [] math: false medium_enabled: false --- One common approach to securely accessing remote services is to use a VPN. This builds an encrypted networking layer which makes it easy to use SSH, HTTPS, and other various protocols. Personally, this worked great on my Linux machines. However, I found it a pain to use on a mobile device. 1. On Android, you can only use one VPN at any given time. This is the case even when the IP ranges are not in conflict. 2. It's difficult to edit DNS host entries. Especially without root, or generating another VPN profile. On my phone, I want to be able to open my browser and go to a publicly resolvable URL to access my homelab services in a secure way. What's very common in enterprise settings is to run an authenticated proxy through a single sign-on (SSO) service. This would normally prompt for a username and password, followed sometimes with a 2FA code before redirecting you to the service you want. I don't fully trust myself to securely maintain a SSO service as a hobby. Also instead of typing in a username and password each time, I want to rely on keys instead. This is where mutual-TLS (mTLS) comes in. In the standard TLS setup, the server provides the client with its public key so that the client can send encrypted communications as well as validate that the server is who they say they are. This validation is done through trusted certificate authorities. The [CA/Browser Forum](https://cabforum.org/) is a self-regulated group that issues a [set of requirements](https://cabforum.org/working-groups/netsec/documents/) for certificate authorities to follow. Ultimately, vendors have their own policies (ex: [Mozilla](https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/)) for deciding whether to trust a SSL certificate coming from a certificate authority. However, many follow the CAB forum guidelines. When a server is configured with mTLS, not only does the client authenticate the server but the server also authenticates the client. If the client doesn't respond with the proper certificate, then the server will return a HTTP 400 error. ### Overall Setup To setup an authenticated proxy with mTLS, we will use `nginx` as the reverse proxy and Cloudflare's PKI and TLS toolkit [`cfssl`](https://github.com/cloudflare/cfssl). You can replace the `cfssl` commands with the equivalent `openssl` ones, but I won't cover that in this post. I'll have a publicly available server with the reverse proxy installed and setup to relay traffic over the VPN to my homelab. We'll use certificates generated by `cfssl` to authenticate the client and server. I also wrote about `cfssl` [in the past](https://brandonrozek.com/blog/internalca/), but I'll keep this blog post self-contained. ### Setting up the certificates `cfssl` relies on having JSON files for its configuration. 
First let's set up the certificate authority in a file called `csr_ca.json`, replacing the fields as needed:

```json
{
    "CN": "SomeCommonName",
    "key": {
        "algo": "rsa",
        "size": 4096
    },
    "names": [
        {
            "C": "US",
            "O": "SomeOrg",
            "OU": "SomeOrgUnit",
            "ST": "New York",
            "L": "New York"
        }
    ]
}
```

Then generate the root certificates for our certificate authority:

```bash
cfssl gencert -initca csr_ca.json | cfssljson -bare ca
```

This will generate the following files:

| Filename   | Purpose                     |
| ---------- | --------------------------- |
| ca.pem     | Public Certificate          |
| ca-key.pem | Private Key                 |
| ca.csr     | Certificate Signing Request |

With these files that constitute our certificate authority, we'll generate the server and client certificates.

When generating the server certificate, we need to declare which URLs the certificate is valid for. Instead of creating a new certificate for every service in my homelab, we'll make use of a wildcard certificate on a particular domain. As a personal preference, I like to keep my certificates separated by device. Create a new folder called `server` with the following in `csr_server.json`:

```json
{
    "hosts": [
        "*.internal.somedomain.com"
    ],
    "key": {
        "algo": "rsa",
        "size": 4096
    },
    "names": [
        {
            "C": "US",
            "O": "SomeOrg",
            "OU": "SomeOrgUnit",
            "ST": "New York",
            "L": "New York"
        }
    ]
}
```

Then we similarly create the three files relating to the server with the following command:

```bash
cfssl gencert -ca=../ca.pem -ca-key=../ca-key.pem csr_server.json | cfssljson -bare cert
```

Here I assumed the root certificates are stored one level up (`..`), but replace the path as necessary.

To match how `nginx` expects its certificates, we'll perform some renames and concatenations.

```bash
mv cert-key.pem privkey.pem
mv cert.pem chain.pem
cat privkey.pem > fullchain.pem
cat chain.pem >> fullchain.pem
```

We'll need to run `cfssl` one more time to generate our client's certificates. Create a folder for the specific client and put the following in `csr_client.json`:

```json
{
    "key": {
        "algo": "rsa",
        "size": 4096
    },
    "names": [
        {
            "C": "US",
            "O": "SomeOrg",
            "OU": "SomeOrgUnit",
            "ST": "New York",
            "L": "New York"
        }
    ]
}
```

Then generate the certificates:

```bash
cfssl gencert -ca=../ca.pem -ca-key=../ca-key.pem csr_client.json | cfssljson -bare cert
```

Firefox expects the key in a specific format (PKCS #12), so we'll ask `openssl` to create it for us.

```bash
openssl pkcs12 -export -out user.pfx -inkey cert-key.pem -in cert.pem -certfile ../ca.pem
```

When running this command, it'll ask for an export password. You'll need to remember this when attempting to import this key. You can leave it blank if you're intending to use it for Firefox, but I found that Android devices won't accept this unless some password is set.

Before we can import `user.pfx` on our favorite device, we need to have our device trust the root certificate authority. Since we created our own and didn't go through a CA like Let's Encrypt, devices will not trust the certificates by default.

On Android you can import `ca.pem` via `More security & privacy -> Encryption & credentials -> Install a certificate -> CA certificate`. Then you can install `user.pfx` via `More security & privacy -> Encryption & credentials -> Install a certificate -> VPN & app user certificate`.
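If a device rejects the import, it's worth double-checking that the client certificate actually chains back to our root CA. Here's a small sketch using `openssl`, assuming you're inside the client's folder with the root certificates one level up as before:

```bash
# Confirm the client certificate was signed by our root CA
openssl verify -CAfile ../ca.pem cert.pem

# Inspect the subject and validity window of the client certificate
openssl x509 -in cert.pem -noout -subject -dates
```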
### Nginx Authenticated Proxy Setup

Since we're looking to access multiple homelab services from our authenticated proxy, we'll make use of regexes in the `server_name` so that we only need to write one config.

```nginx
server_name "~^(?<subdomain>.+)\.internal\.somedomain\.com$";
```

This matches on `*.internal.somedomain.com` where the `*` wildcard means that it can be any string. The regex looks slightly different since we want to capture the wildcard portion in a variable called `$subdomain`.

A good security posture is to provide an allow-list of permitted subdomains.

```nginx
set $valid_subdomain 0;
if ($subdomain = "X") {
    set $valid_subdomain 1;
}
if ($subdomain = "Y") {
    set $valid_subdomain 1;
}
if ($valid_subdomain = 0) {
    return 403;
}
```

We can see here that if we don't match on `X.internal.somedomain.com` or `Y.internal.somedomain.com`, we return a `403 Forbidden` HTTP code.

To enable mTLS, we need to turn on client verification. For this, we need the certificate of the root CA.

```nginx
ssl_verify_client on;
ssl_client_certificate /path/to/root/certificates/ca.pem;
```

Then when we do the proxying, we need to set the `Host` header so that it does not include the `internal` component of the domain name.

```nginx
location / {
    proxy_pass https://10.10.10.2;
    proxy_set_header Host "$subdomain.somedomain.com";
    # Other options omitted for brevity
}
```

The proxied URL shouldn't accept outside traffic directly. Otherwise, people can bypass the authenticated proxy! You can enforce this by either (1) only responding when the source IP matches the proxy's IP, or (2) having the URL only resolvable when using a VPN.

Check the bottom of this post for the whole config, but when that's all set up we can restart the `nginx` service and verify that no errors are shown.

```bash
sudo systemctl restart nginx
```

With that, we should be able to securely access our homelab remotely! Give it a test, visiting `X.internal.somedomain.com`, replacing the relevant parts of the URL. Your browser should then prompt for the certificate, and if all goes well you should see the service.

On Android, sadly the Firefox app doesn't support mTLS, but the default Chrome browser does. On desktop, both Firefox and Chrome do support mTLS.

Feel free to write in if you have any questions about your setup.

Full Nginx config which I store at `/etc/nginx/conf.d/internal.conf`:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    listen [::]:80;

    server_name "~^(?<subdomain>.+)\.internal\.somedomain\.com$";

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    server_name "~^(?<subdomain>.+)\.internal\.somedomain\.com$";

    set $valid_subdomain 0;
    if ($subdomain = "X") {
        set $valid_subdomain 1;
    }
    if ($valid_subdomain = 0) {
        return 403;
    }

    ssl_certificate /path/to/server/certificates/fullchain.pem;
    ssl_certificate_key /path/to/server/certificates/privkey.pem;
    include /etc/letsencrypt/conf/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/conf/ssl-dhparams.pem;

    ssl_trusted_certificate /path/to/server/certificates/chain.pem;
    ssl_stapling on;
    ssl_stapling_verify on;

    ssl_verify_client on;
    ssl_client_certificate /path/to/root/certificates/ca.pem;

    location / {
        proxy_pass https://10.10.10.2;
        proxy_set_header Host "$subdomain.somedomain.com";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_cache_bypass $http_upgrade;
    }
}
```
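As a final sanity check once the config is in place, you can also exercise the proxy from a terminal, since `curl` can present the client certificate directly. A quick sketch, assuming the client key pair and `ca.pem` generated earlier and substituting your own hostname:

```bash
# Without a client certificate, nginx should reject the request with an HTTP 400
curl --cacert ca.pem https://X.internal.somedomain.com/

# With the client certificate and key, the proxied service should respond
curl --cacert ca.pem --cert cert.pem --key cert-key.pem https://X.internal.somedomain.com/
```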