
Adding HTTPS support to this site

Yesterday I finally ticked something off my todo list that'd been on there for a while - I added proper HTTPS support to this site, powered by Let's Encrypt.

The Qualys SSL Labs SSL Server Test now gives this site an A+ rating. You can see all the certs (past and present) for it on crt.sh, an online certificate search tool.

For those interested in how things work:

Let's Encrypt is a "free, automated, and open" certificate authority. Through an automatable process, they issue certificates that are valid for 90 days.

Exactly how I got it all working can be seen in my website repo, where this entire site and its deployment scripts are kept, but here's the gist of it:

I decided to use dehydrated instead of the official Let's Encrypt client to generate the certificates. dehydrated is ~1000 lines of bash and only requires things like openssl, curl, and a few other programs that any UNIX system should already have. It takes a config file and a list of domains. Once those are in place, running it from cron is easy.
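For illustration, the two inputs look something like this (the paths, email, and extra domain here are placeholders rather than my exact setup):

# /etc/dehydrated/config
CHALLENGETYPE="http-01"
WELLKNOWN="/var/www/dehydrated"
CONTACT_EMAIL="someone@example.com"

# /etc/dehydrated/domains.txt - one certificate per line;
# extra names on the same line become alternative names on the cert
cmetcalfe.ca www.cmetcalfe.ca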

The ACME protocol that Let's Encrypt uses to verify domain ownership requires the web server to serve specific challenge files over HTTP. This check ensures that only someone with control over the domain can generate a cert for it. To return the challenge files generated by dehydrated, I added a location rule to the Nginx config.
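The rule looks roughly like this (a sketch; the alias path just has to match the directory dehydrated writes its challenge files to):

location ^~ /.well-known/acme-challenge/ {
    alias /var/www/dehydrated/;
}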

There was a bootstrapping problem when generating certs for the first time: since the certs didn't exist yet, Nginx was failing to start (and therefore causing the domain validation to fail). The solution was to make two sets of configs: an HTTP set and an HTTPS set. During the renewal process, the HTTPS configs are tried first, and if running Nginx with them fails, it falls back to the HTTP set. Switching between the HTTP and HTTPS configs is done by modifying a symlink.
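A sketch of that fallback logic, with hypothetical paths (the real version lives in the deployment scripts):

# Try the HTTPS configs first...
ln -sfn /etc/nginx/configs-https /etc/nginx/configs-active
if ! systemctl restart nginx; then
    # ...and fall back to the HTTP-only set if Nginx won't start
    # (e.g. because the certs don't exist yet)
    ln -sfn /etc/nginx/configs-http /etc/nginx/configs-active
    systemctl restart nginx
fi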

Once everything is set up, generating a cert for the first time and renewing it are both as easy as deploying the site. For continual, automated renewals, a monthly cron job was added. With certs valid for 90 days, renewing monthly leaves plenty of margin, so the certs should never expire.
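The cron job boils down to a single crontab line along these lines (the schedule and path are illustrative):

# Check/renew certs at 04:00 on the first of every month
0 4 1 * * /usr/local/bin/dehydrated --cron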


YouTube subscriptions via RSS

The subscription feature on YouTube allows you to keep up to date with content that people upload to the site.

Since I use an RSS reader for every other blog or site that I follow, why not do the same for YouTube subscriptions?

Update

This method no longer works. The YouTube v2 API (which is what this method was using) was retired on April 20th, 2015.

To work around this, each channel must be subscribed to separately. See the RSS reader section on this support page.
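For reference, the per-channel feeds look like this, where <channelID> is the ID found in the channel page's URL:

https://www.youtube.com/feeds/videos.xml?channel_id=<channelID>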

The feed for anyone's subscribed videos is at the URL:

http://gdata.youtube.com/feeds/base/users/<userID>/newsubscriptionvideos

Where <userID> is either your YouTube account name or the long string of letters and numbers that can be found on the YouTube advanced settings page.

Try it out, it makes watching episodic content a breeze!

Warning

You'll need to have "Keep all my subscriptions private" unchecked on the YouTube privacy settings page for this to work.

This will allow anyone to access the RSS feed of your subscriptions at the URL above.


Dynamic DNS client for Namecheap using bash & cron

In addition to running this website, I also run a home server. For convenience, I point a subdomain of cmetcalfe.ca at it so that even though it's connected using a dynamic IP (which actually seems to change fairly frequently), I can access it from anywhere.

As a bit of background, the domain for this website is registered and managed through Namecheap. While they do provide a recommended DDNS client for keeping a domain's DNS updated, it only runs on Windows.

Instead, after enabling DDNS for the domain and reading Namecheap's article on using the browser to update DDNS, I came up with the following dns-update script.

#!/bin/sh
# Update the Namecheap DDNS record for <subdomain>.cmetcalfe.ca
# whenever the server's public IP no longer matches its DNS record.

# The IP currently in DNS for the subdomain, and the server's actual
# public IP (as reported by OpenDNS)
dns=$(dig <subdomain>.cmetcalfe.ca @resolver1.opendns.com +short)
curr=$(dig myip.opendns.com @resolver1.opendns.com +short)

if [ $? -eq 0 ] && [ -n "$curr" ] && [ "$dns" != "$curr" ]; then
    # Ask Namecheap to update the record; on success the response
    # contains <ErrCount>0</ErrCount>
    if curl -s "https://dynamicdns.park-your-domain.com/update?host=<subdomain>&domain=cmetcalfe.ca&password=<my passkey>" | grep -q "<ErrCount>0</ErrCount>"; then
        systemd-cat -t "$(basename "$0")" echo "Server DNS record updated ($dns -> $curr)"
    else
        systemd-cat -t "$(basename "$0")" echo "Server DNS record update FAILED (tried $dns -> $curr)"
    fi
fi

It basically checks if the IP returned by a DNS query for the subdomain matches the current IP of the server (as reported by an OpenDNS resolver) and if it doesn't, sends a request to update the DNS. The systemd-cat commands are there just to put some record of the IP changing into the syslog. Maybe I'll do some analysis of it at some point.

From here, it's as easy as using cron to run this script every 30 minutes (*/30 * * * * /usr/local/bin/dns-update) and I can be confident that my subdomain will always be pointing at my home server when I need access to it.

Note

A previous version of this article used curl -sf http://curlmyip.com to find the server's current IP address. However, after curlmyip went down for a few days, I decided to take the advice in this StackExchange answer and use OpenDNS instead.
