Posts | Tags | Archive

Adding HTTPS support to this site

Yesterday I finally ticked something off my todo list that'd been on there for a while - I added proper HTTPS support to this site, powered by Let's Encrypt.

The Qualys SSL Labs SSL Server Test now gives this site an A+ rating. You can see all the certs (past and present) for it using an online certificate search tool.

For those interested in how things work:

Let's Encrypt is a "free, automated, and open" certificate authority. They provide certificates via an automatable process that are valid for 90 days.

Exactly how I got it all working can be seen in my website repo, where this entire site and its deployment scripts are kept, but here's the gist of it:

I decided to use dehydrated instead of the official Let's Encrypt client to generate the certificates. dehydrated is ~1000 lines of bash and only requires openssl, curl, and a few other programs that any UNIX system should already have. It takes a config file and a list of domains; once those are in place, running it from cron is easy.
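As a rough sketch of what that setup can look like (the paths and values below are illustrative, not copied from my actual config), dehydrated reads a config file plus a domains.txt listing the domains to issue certs for:

```
# /etc/dehydrated/config (illustrative values)
CONTACT_EMAIL="admin@example.com"
WELLKNOWN="/var/www/dehydrated"

# /etc/dehydrated/domains.txt - one line per cert; extra names become SANs
example.com www.example.com
```

Running dehydrated with the --cron (-c) flag then signs or renews certificates for every listed domain.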

The ACME protocol that Let's Encrypt uses to verify domain ownership requires the web server to respond to certain requests. This check makes sure that only someone with control over the domain can generate a cert for it. To allow those requests to return the files generated by dehydrated, a location rule in the Nginx config was added.
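That rule looks something like the following (the alias path is just whichever WELLKNOWN directory dehydrated writes its challenge responses to; a sketch, not my exact config):

```
location ^~ /.well-known/acme-challenge/ {
    alias /var/www/dehydrated/;
}
```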

Initially there were problems with generating certs for the first time. Since the certs didn't exist yet, Nginx failed to start (which in turn caused the domain validation to fail). The solution was to make two sets of configs: an HTTP set and an HTTPS set. During the renewal process the HTTPS configs are tried first, and if running Nginx fails, it falls back to the HTTP set. Switching between the HTTP and HTTPS configs is done by modifying a symlink.
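The fallback can be sketched like this, using a temp directory and a stand-in for the Nginx config test; the try_config helper and all paths here are hypothetical, not the site's actual deploy script:

```shell
# Hypothetical sketch of the HTTPS-with-HTTP-fallback logic
confdir=$(mktemp -d)
printf 'https config\n' > "$confdir/https.conf"
printf 'http config\n' > "$confdir/http.conf"

try_config() {
    # Repoint the "active config" symlink, then check that the config
    # is usable (the real script runs Nginx; here we pretend the HTTPS
    # config fails because the certs don't exist yet)
    ln -sfn "$confdir/$1.conf" "$confdir/active.conf"
    [ "$1" = "http" ]
}

# Try the HTTPS config first, falling back to plain HTTP
try_config https || try_config http
cat "$confdir/active.conf"   # prints "http config"
```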

When all set up, this makes both generating a cert for the first time and renewing a cert as easy as deploying the site. For continual, automated renewals, a monthly cron job was added. Since the certs are valid for 90 days, renewing monthly should ensure they never expire.
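A crontab entry for that could look like this (the path and schedule are illustrative):

```
# At 4am on the 1st of every month, sign/renew any missing or expiring certs
0 4 1 * * /usr/local/bin/dehydrated --cron
```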

YouTube subscriptions via RSS

The subscription feature on YouTube allows you to keep up to date with the content that people upload to the site.

Since I use an RSS reader for every other blog or site that I follow, why not do the same for YouTube subscriptions?


This method no longer works. The YouTube v2 API (which is what this method was using) was retired on April 20th, 2015.

To work around this, each channel must be subscribed to separately. See the RSS reader section on this support page.
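For reference, the per-channel feeds look like this (placeholders shown; channel_id takes the channel's ID and, at least at the time of writing, user takes the channel's legacy username):

```
https://www.youtube.com/feeds/videos.xml?channel_id=<channelID>
https://www.youtube.com/feeds/videos.xml?user=<username>
```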

The feed for anyone's subscribed videos is at the URL:


Where <userID> is either your YouTube account name or the long string of letters and numbers that can be found on the YouTube advanced settings page.

Try it out, it makes watching episodic content a breeze!


You'll have to have "Keep all my subscriptions private" unchecked on the YouTube privacy settings page for this to work.

This will allow anyone to access the RSS feed of your subscriptions at the URL above.

Dynamic DNS client for Namecheap using bash & cron

In addition to running this website, I also run a home server. For convenience, I point a subdomain at it so that even though it's connected using a dynamic IP (which actually seems to change fairly frequently), I can get access to it from anywhere.

As a bit of background, the domain for this website is registered and managed through Namecheap. While they do provide a recommended DDNS client for keeping a domain's DNS updated, it only runs on Windows.

Instead, after enabling DDNS for the domain and reading Namecheap's article on using the browser to update DDNS, I came up with the following dns-update script.


#!/bin/bash

# Abort if anything goes wrong (negates the need for error-checking)
set -e

# Resolves a hostname to an IP; uses drill instead of dig
# (extra arguments, e.g. "@resolver", are passed through to drill)
resolve() {
    #dig "$1" +short 2> /dev/null
    line=$(drill "$@" 2> /dev/null | sed '/;;.*$/d;/^\s*$/d' | grep "$1")
    echo "$line" | head -1 | cut -f5
}

# The server's current IP, as reported by an OpenDNS resolver
curr=$(resolve myip.opendns.com @resolver1.opendns.com)

# The IP that the subdomain currently resolves to
dns=$(resolve <subdomain>)

if [ "$dns" != "$curr" ]; then
    if curl -s "<subdomain>&<my passkey>" | grep -q "<ErrCount>0</ErrCount>"; then
        echo "Server DNS record updated ($dns -> $curr)"
    else
        echo "Server DNS record update FAILED (tried $dns -> $curr)"
    fi
fi
It basically checks whether the IP returned by a DNS query for the subdomain matches the current IP of the server (as reported by an OpenDNS resolver) and, if it doesn't, sends a request to update the DNS. The echo commands are there just to output some record of the IP changing. Maybe I'll do some analysis of it at some point.

To run the script every 30 minutes and redirect any output from it to the syslog, the following crontab entry can be used:

*/30 * * * * /path/to/dns-update | /usr/bin/logger -t dns-update

With the script automatically running every 30 minutes I can now be confident that my subdomain will always be pointing at my home server whenever I need access to it.


A previous version of this article used curl -sf to find the server's current IP address. However, after curlmyip went down for a few days, I decided to take the advice in this StackExchange answer and use OpenDNS instead.

© Carey Metcalfe. Built using Pelican. Theme is subtle by Carey Metcalfe. Based on svbhack by Giulio Fidente.