Tag: website

Proxied access to the Namecheap DNS API

Background

Both this site and my home server use HTTPS certificates provided by Let's Encrypt. Previously I was using the http-01 challenge method, but once wildcard certificates became available, I decided to switch to the dns-01 method for simplicity.

My domains are currently registered through Namecheap, which provides an API for modifying DNS records. The catch is that it can only be accessed from manually-whitelisted IP addresses. Because the server I'm trying to access the API from is assigned a dynamic IP, this restriction was a bit of an issue.

Back in November when I initially made the switch to the DNS-based challenge, I set it up the easy way: I manually added my current IP to the whitelist, added a TODO entry to fix it somehow, and set a reminder scheduled for the week before the cert would expire telling me to update the IP whitelist. Fast-forward to today when I was reminded to update my IP whitelist. Instead of continuing to kick the can down the road, I decided to actually fix the issue.

Setting up a temporary proxy

My home server is connected to the internet through a normal residential connection which is assigned a dynamic IP address. However, the server that I run this website on is configured with a static IP since it's a hosted VPS. By proxying all traffic to the Namecheap API through my VPS, I could add the VPS's static IP to the whitelist and not have to worry about my home IP changing all the time.

SSH is the perfect tool for this. The OpenSSH client supports dynamic port forwarding (a SOCKS proxy) via the -D flag. By then setting the HTTP[S]_PROXY environment variables to point at the forwarded port, programs that support those environment variables will transparently forward all their requests through the proxy.

After a bunch of research and testing, I came up with the following script to easily set up the temporary proxy:

#!/bin/sh
# Source this file to automatically set up and tear down an HTTP* proxy through cmetcalfe.ca
# Use the PROXY_PORT and PROXY_DEST environment variables to customize the proxy

PROXY_PORT="${PROXY_PORT:-11111}"
PROXY_DEST="${PROXY_DEST:-<username>@cmetcalfe.ca}"
PID="$$"

# Teardown the SSH connection when the script exits
trap 'ssh -q -S ".ctrl-socket-$PID" -O exit "$PROXY_DEST"' EXIT

# Set up an SSH tunnel and wait for the port to be forwarded before continuing
if ! ssh -o ExitOnForwardFailure=yes -M -S ".ctrl-socket-$PID" -f -N -D "$PROXY_PORT" "$PROXY_DEST"; then
    echo "Failed to open SSH tunnel, exiting"
    exit 1
fi

# Set environment variables to redirect HTTP* traffic through the proxy
export HTTP_PROXY="socks5://127.0.0.1:$PROXY_PORT"
export HTTPS_PROXY="$HTTP_PROXY"

This script is saved as ~/.config/tempproxy.rc and can be sourced to automatically set up a proxy session and have it be torn down when the sourcing script exits.

You'll want to use key-based authentication with an unencrypted private key so that initiating the SSH session doesn't require any interaction. For this reason you'll probably want to create a limited user on the target system that can only do port forwarding. There's a good post on this here.
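As a sketch, that limited user could be locked down in the server's sshd_config with something like the following (the username is a placeholder, and this is only one of several approaches):

```
# /etc/ssh/sshd_config on the VPS -- hypothetical tunnel-only user
Match User proxyuser
    AllowTcpForwarding yes   # required for the -D SOCKS proxy
    X11Forwarding no
    PermitTTY no
    ForceCommand /bin/false  # no interactive shell or commands
```

Because the proxy script connects with -N (no remote command), the ForceCommand never actually runs; it just blocks anyone trying to get a shell with that key.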

Talking to the API through the proxy

To allow programs that use the tempproxy.rc script to talk to the Namecheap API, the IP address of the VPS was added to the whitelist. With the proxying issue taken care of, I just needed to wire up the actual certificate renewal process to use it.

The tool I'm using to talk to the Namecheap DNS API is lexicon. It can handle manipulating the DNS records of a ton of providers and integrates really nicely with my ACME client of choice, dehydrated. Also, because it's using the PyNamecheap Python library, which in turn uses requests under the hood, it will automatically use the HTTP*_PROXY environment variables when making requests.

The only tricky bit is that the base install of lexicon won't automatically pull in the packages required for accessing the Namecheap API. Likewise, requests won't install packages to support SOCKS proxies. To install all the required packages you'll need to run a command like:

pip install 'requests[socks]' 'dns-lexicon[namecheap]'

Since lexicon can use environment variables as configuration, I created another small source-able file at ~/.config/lexicon.rc:

#!/bin/sh
# Sets environment variables for accessing the Namecheap API

# Turn on API access and get an API token here:
# https://ap.www.namecheap.com/settings/tools/apiaccess/
export PROVIDER=namecheap
export LEXICON_NAMECHEAP_USERNAME=<username>
export LEXICON_NAMECHEAP_TOKEN=<api token>

With that in place, the existing automation script that I had previously set up when I switched to using the dns-01 challenge just needed some minor tweaks to source the two new files. I've provided the script below, along with some other useful ones that use the DNS API.

Scripts

do-letsencrypt.sh

This is the main script that handles all the domain renewals. It's called via cron on the first of every month.

Fun fact: The previous http-01 version of this script was a mess - it involved juggling nginx configurations with symlinks, manually opening up the .well-known/acme-challenge endpoint, and some other terrible hacks. This version is much nicer.

#!/bin/sh
# Generate/renew Let's Encrypt certificates using dehydrated and lexicon

set -e

# Set configuration variables and setup the proxy
source ~/.config/lexicon.rc
source ~/.config/tempproxy.rc

cd /etc/nginx/ssl

# Update the certs.
# Hook script copied verbatim from
# https://github.com/AnalogJ/lexicon/blob/master/examples/dehydrated.default.sh
if ! dehydrated --accept-terms --cron --challenge dns-01 --hook dehydrated.default.sh; then
    echo "Failed to renew certificates"
    exit 1
fi

# Reload the nginx config if it's currently running
if ! systemctl is-active nginx.service > /dev/null; then
    echo "nginx isn't running, not reloading it"
    exit 0
fi

if ! systemctl reload nginx.service; then
    systemctl status nginx.service
    echo "Failed to reload nginx config, check the error log"
    exit 1
fi

dns.sh

This one is really simple - it just augments the normal lexicon CLI interface with the proxy and configuration variables.

#!/bin/sh
# A simple wrapper around lexicon.
# Sets up the proxy and configuration, then passes all the arguments to the configured provider

source ~/.config/lexicon.rc
source ~/.config/tempproxy.rc
lexicon "$PROVIDER" "$@"
$ ./dns.sh list cmetcalfe.ca A
ID       TYPE NAME             CONTENT         TTL
-------- ---- ---------------- --------------- ----
xxxxxxxx A    @.cmetcalfe.ca   xxx.xxx.xxx.xxx 1800
xxxxxxxx A    www.cmetcalfe.ca xxx.xxx.xxx.xxx 1800

dns-update.sh

An updated version of my previous script that uses the API instead of the DynamicDNS service Namecheap provides. Called via cron every 30 minutes.

Update

A previous version of this script didn't try to delete the DNS entry before setting the new one, resulting in multiple IPs being set as the same A record.

This is because the update function of the Namecheap plugin for lexicon is implemented as a two-step delete and add. When it tries to delete the old record, it looks for one with the same content as the new one. This means that if you update a record with a new IP, it doesn't find the old record to delete first, leaving you with all of your old IPs set as individual A records.

#!/bin/sh
# Check the server's current IP against the IP listed in its DNS entry.
# Set the current IP if they differ.

set -e

resolve() {
    line=$(drill "$1" @resolver1.opendns.com 2> /dev/null | sed '/;;.*$/d;/^\s*$/d' | grep "$1")
    echo "$line" | head -1 | cut -f5
}

dns=$(resolve <subdomain>.cmetcalfe.ca)
curr=$(resolve myip.opendns.com)
if [ "$dns" != "$curr" ]; then
    source ~/.config/lexicon.rc
    source ~/.config/tempproxy.rc
    if ! lexicon "$PROVIDER" delete cmetcalfe.ca A --name "<subdomain>.cmetcalfe.ca"; then
        echo "Failed to delete old DNS record, not setting the new one."
        exit 1
    fi
    if lexicon "$PROVIDER" update cmetcalfe.ca A --name "<subdomain>.cmetcalfe.ca" --content "$curr" --ttl=900; then
        echo "Server DNS record updated ($dns -> $curr)"
    else
        echo "Server DNS record update FAILED (tried $dns -> $curr)"
    fi
fi

Bank of Canada currency conversion rates in the terminal

Preface

In this entry I'll use two command line tools (curl and jq) to request currency conversion rates from the Bank of Canada. Now obviously getting the same data through the normal web interface is possible as well. However, doing it this way returns the data in a machine-readable form, allowing it to be customized or used as input into other programs.

The basic flow will be to request the data using curl, then pipe it to jq for processing.

Note

This article will request the USD/CAD exchange rate from 2017-01-01 to 2017-01-10. It should be fairly obvious how to switch this for any other currencies or date ranges. See the Appendix for more information and caveats with the dates and the conversion codes.

We'll go from seeing the raw data the server returns to something that we can more easily use.

Update

The URLs for the data were originally reverse-engineered from the website. They have since been turned into a supported API. See the API documentation for more details.

Let's go!

Hitting the server without any processing will give the following result:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10"
{
"terms":{
    "url": "http://www.bankofcanada.ca/terms/"
},
"seriesDetail":{
"FXUSDCAD":{"label":"USD/CAD","description":"US dollar to Canadian dollar daily exchange rate"}
},
"observations":[
{"d":"2017-01-02","FXUSDCAD":{"e":-64}},
{"d":"2017-01-03","FXUSDCAD":{"v":1.3435}},
{"d":"2017-01-04","FXUSDCAD":{"v":1.3315}},
{"d":"2017-01-05","FXUSDCAD":{"v":1.3244}},
{"d":"2017-01-06","FXUSDCAD":{"v":1.3214}},
{"d":"2017-01-09","FXUSDCAD":{"v":1.3240}},
{"d":"2017-01-10","FXUSDCAD":{"v":1.3213}}
]
}

To get the information we want, we need to iterate over the elements in the observations list and map the date (d) to the conversion rate (FXUSDCAD.v). Since this will create a list of single-element dictionaries, we'll also need to "add" (merge) them together.

To do this with jq, it looks like this:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[] | {(.d) : .FXUSDCAD.v}] | add
'
{
  "2017-01-02": null,
  "2017-01-03": 1.3435,
  "2017-01-04": 1.3315,
  "2017-01-05": 1.3244,
  "2017-01-06": 1.3214,
  "2017-01-09": 1.324,
  "2017-01-10": 1.3213
}

That first null entry where there was no data is going to cause problems later, so let's just delete it using the del function:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[] | {(.d) : .FXUSDCAD.v}] | add | del(.[] | nulls)
'
{
  "2017-01-03": 1.3435,
  "2017-01-04": 1.3315,
  "2017-01-05": 1.3244,
  "2017-01-06": 1.3214,
  "2017-01-09": 1.324,
  "2017-01-10": 1.3213
}

This may be good enough for most purposes, but it would be useful to compute some other statistics from this data. For example, say we want to know the average exchange rate for the date range. Now that we have the data in an easy-to-use format, computing an average with jq is just a matter of passing the values to an add / length filter. For example:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[] | {(.d) : .FXUSDCAD.v}] | add | del(.[] | nulls) | add / length
'
1.327683333333333

We can now construct an object that has both the average, as well as all the data. Putting it all together we get:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[] | {(.d) : .FXUSDCAD.v}] | add | del(.[] | nulls) |
    {
        "average": (add / length),
        "values": .
    }
'
{
  "average": 1.327683333333333,
  "values": {
    "2017-01-03": 1.3435,
    "2017-01-04": 1.3315,
    "2017-01-05": 1.3244,
    "2017-01-06": 1.3214,
    "2017-01-09": 1.324,
    "2017-01-10": 1.3213
  }
}

And that's it! Hopefully you found this useful, if not for its stated purpose of retrieving exchange rates, then at least for getting more familiar with jq. It's pretty much the best tool out there when it comes to manipulating JSON on the command line.

Appendix

Date ranges

  • The server will only ever return data from 2017-01-03 onwards. If the start date is before that, the server doesn't return an error; it just doesn't return any data for those dates.
  • If the end date isn't specified, the server assumes the current date.
  • There doesn't seem to be any limit on the amount of data retrieved.

Currency codes

A list of codes can be easily scraped from the lookup page. The command and its result are below for reference.

curl -s "https://www.bankofcanada.ca/rates/exchange/daily-exchange-rates-lookup/" | grep -Eo '"FX......".*<'
"FXAUDCAD">Australian dollar<
"FXBRLCAD">Brazilian real<
"FXCNYCAD">Chinese renminbi<
"FXEURCAD">European euro<
"FXHKDCAD">Hong Kong dollar<
"FXINRCAD">Indian rupee<
"FXIDRCAD">Indonesian rupiah<
"FXJPYCAD">Japanese yen<
"FXMYRCAD">Malaysian ringgit<
"FXMXNCAD">Mexican peso<
"FXNZDCAD">New Zealand dollar<
"FXNOKCAD">Norwegian krone<
"FXPENCAD">Peruvian new sol<
"FXRUBCAD">Russian ruble<
"FXSARCAD">Saudi riyal<
"FXSGDCAD">Singapore dollar<
"FXZARCAD">South African rand<
"FXKRWCAD">South Korean won<
"FXSEKCAD">Swedish krona<
"FXCHFCAD">Swiss franc<
"FXTWDCAD">Taiwanese dollar<
"FXTHBCAD">Thai baht<
"FXTRYCAD">Turkish lira<
"FXGBPCAD">UK pound sterling<
"FXUSDCAD">US dollar<
"FXVNDCAD">Vietnamese dong<

Adding HTTPS support to this site

Yesterday I finally ticked something off my todo list that'd been on there for a while - I added proper HTTPS support to this site, powered by Let's Encrypt.

The Qualys SSL Labs SSL Server Test now gives this site an A+ rating. You can see all the certs (past and present) for it on crt.sh, an online certificate search tool.

For those interested in how things work:

Let's Encrypt is a "free, automated, and open" certificate authority. They provide certificates via an automatable process that are valid for 90 days.

Exactly how I got it all working can be seen in my website repo, where this entire site and its deployment scripts are kept, but here's the gist of it:

I decided to use dehydrated instead of the official Let's Encrypt client to generate the certificates. dehydrated is ~1000 lines of bash and only requires things like openssl, curl, and a few other programs that any UNIX system should already have. It takes a config file and a list of domains; once those are in place, running it from cron is easy.
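For a sense of how minimal those inputs are, a hypothetical setup might look like this (paths and values are illustrative, not copied from my actual config):

```
# /etc/dehydrated/config (illustrative)
CHALLENGETYPE="http-01"
WELLKNOWN="/var/www/dehydrated"

# /etc/dehydrated/domains.txt -- one cert per line; extra names become SANs
cmetcalfe.ca www.cmetcalfe.ca
```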

The ACME protocol that Let's Encrypt uses to verify domain ownership requires the web server to respond to certain requests. This check makes sure that only someone with control over the domain can generate a cert for it. To allow those requests to return the files generated by dehydrated, a location rule was added to the Nginx config.
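The rule is roughly this shape (the alias path is a placeholder; it just has to match the directory dehydrated writes its challenge responses to):

```nginx
# Serve ACME http-01 challenge responses written out by dehydrated
location ^~ /.well-known/acme-challenge/ {
    alias /var/www/dehydrated/;  # placeholder; must match dehydrated's WELLKNOWN dir
    default_type text/plain;
}
```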

Initially, there were problems generating certs for the first time. Since the certs didn't exist yet, Nginx was failing to start (and therefore causing the domain validation to fail). The solution was to make two sets of configs: an HTTP set and an HTTPS set. During the renewal process, the HTTPS configs are tried first, and if running Nginx with them fails, it falls back to the HTTP set. Switching between the HTTP and HTTPS configs is done by modifying a symlink.
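As a rough, self-contained sketch of that fallback logic (file names are placeholders, and the nginx -t config check stands in for actually (re)starting Nginx):

```shell
#!/bin/sh
# Toy model of the HTTPS-first, HTTP-fallback config switching.
# File names are placeholders; the real configs live in the website repo.
set -e
dir=$(mktemp -d)
: > "$dir/https.conf"  # would hold the TLS server blocks
: > "$dir/http.conf"   # HTTP-only fallback used before certs exist

# Try the HTTPS config first
ln -sfn "$dir/https.conf" "$dir/enabled.conf"
if ! nginx -t -c "$dir/enabled.conf" 2>/dev/null; then
    # Certs missing (or nginx unhappy/absent): fall back to HTTP-only
    ln -sfn "$dir/http.conf" "$dir/enabled.conf"
fi
target=$(basename "$(readlink "$dir/enabled.conf")")
echo "enabled config: $target"
```

On a box where the certs don't exist yet (or nginx isn't even installed), the check fails and enabled.conf ends up pointing at the HTTP-only config.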

Once everything is set up, this makes both generating a cert for the first time and renewing it as easy as deploying the site. For continual, automated renewals, a monthly cron job was added. With certs expiring after 90 days, this should be frequent enough that they never lapse.
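The cron entry itself is a one-liner; something along these lines (the script path is a placeholder):

```
# /etc/crontab -- run the renewal at midnight on the 1st of every month
0 0 1 * * root /path/to/renew-certs.sh
```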

© Carey Metcalfe. Built using Pelican. Theme is subtle by Carey Metcalfe. Based on svbhack by Giulio Fidente.