
Disabling emergency alerts on Android

Warning

You probably shouldn't do this. If you already know this and don't care, read on.

Background

In Canada, the CRTC mandates that telecommunications companies and other organizations broadcast warning messages over various media, one of which is cell phones.

Overall, this is a good idea that saves lives. However, the implementation is incredibly bad. All alerts, no matter the severity, are broadcast at the very highest level (called "Presidential" by Android phones), meaning that they can't be silenced or disabled by the user. In fact, the CRTC actually tested this and found that the only reliable way to silence the alerts was to put the phone on airplane mode.

After I received three alerts all about an hour apart starting at 2am telling me about a missing child over 1,300km away, I'd had enough.

This is a "boy who cried wolf" situation. If these alerts ever:

  1. Implement a basic distance filter so I'm not woken up by things happening nowhere near me
  2. Start using an appropriate alert level instead of using the highest level for everything

I'll re-enable them. For now though, I don't want to be repeatedly woken up in the middle of the night because someone a full day's drive from me went missing. Maybe that's cold-hearted, but realistically what was I even going to do about it?

On Android at least, the lower-level emergency alerts can be configured not to make noise when the device is silenced, though they can't be disabled entirely. This seems a lot better: it won't wake people up, but will show them the information the next time they look at their phone. Let's do that instead.

For now though, I'm turning everything off. Hopefully I don't get killed by a tornado or something.

How to disable ALL alerts

This is surprisingly simple - the application responsible for handling the broadcasts can be uninstalled.

Warning

This will disable ALL emergency broadcasts. If you live in an area that regularly has severe weather or don't have a backup method for receiving these alerts, this is definitely not a good idea. It also probably violates some sort of policy (I'm not a lawyer). You are responsible for your own decisions.

To uninstall it, enable developer options and debugging on the phone, connect it to a computer with ADB installed and run:

adb -d shell "pm uninstall -k com.android.cellbroadcastreceiver"
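If you ever want the alerts back, the package can usually be restored for the user without reflashing. My understanding is that `pm install-existing` does this on recent Android versions (verify against your device before relying on it):

```shell
# Reinstall the previously-uninstalled broadcast receiver for the current user
adb -d shell "pm install-existing com.android.cellbroadcastreceiver"
```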

Thanks to this reddit post for the method of disabling the alerts.

Why is it like this?

Which organization is responsible? Can this be changed?

Below is a rough dump of some details I found while trying to dig up which organization is responsible for this, with the hopes of actually effecting some change.

On 2017-04-06 the CRTC issued Telecom Regulatory Policy CRTC 2017-91 which "directs wireless service providers to implement wireless public alerting capability on their long-term evolution networks by 6 April 2018". Relevant information pulled from that page:

34. SOREM has established a definitive list of emergency alert message types for immediate broadcast by alert distributors. These messages are defined as “Broadcast Immediately” (BI).

In a footnote on the page it defines SOREM as:

SOREM is a federal/provincial/territorial body that works to harmonize and improve emergency practices across the country. It includes representatives from provincial and territorial emergency management organizations as well as Public Safety Canada.

There was some disagreement on making the alerts mandatory:

35. Individual interveners’ views were divided among mandatory receipt, an opt-in mechanism, and the choice to opt out. Technology solutions providers indicated mixed support between mandatory reception and the choice to opt out. EMOs strongly supported mandatory receipt.

36. The OPC submitted that many people consider their mobile devices to be private and personal, and that some individuals could consider emergency alerts to be intrusive. It recommended that the Commission review how opt-out mechanisms have been implemented in the United States and other jurisdictions, and whether the ability to opt out has had a significant negative effect on alerting individuals in an emergency situation. It submitted that individuals should be allowed to opt out if the effect is found to be minimal.

39. Some WSPs provided insight on past experiences regarding the distribution of AMBER Alerts over their parent BDUs. For example, in March 2016, the Ontario Provincial Police issued an AMBER Alert, which was distributed by BDUs. Many television viewers were upset by the interruption of their television viewing and contacted their BDU customer service centres and, in some cases, 9-1-1 to complain.

And finally, in a section titled "Commission’s analysis and determinations", a decision was made:

41. Although some interveners suggested modifying the BI list, there was no consensus on which specific categories should be added or removed.

42. However, the question in this proceeding is not whether the BI list used in the broadcasting context should be changed, but whether the same or a different approach should be taken with respect to emergency alert messages sent to mobile devices compared to those sent via broadcasting emergency alerting. The current BI list is designed to protect against threats to life and property. It is therefore important to promote consistency between broadcasting and wireless emergency alerts so that Canadians can receive the same alerts regardless of transmission medium.

43. In light of the above, the Commission mandates the reception of emergency alert messages on mobile devices, based on the BI list developed by SOREM, as amended from time to time.

This seems to say that the list of what alerts exist and their priority is managed by SOREM.

To back this up, the CRTC's Emergency Alerts and the National Public Alerting System page has a section titled "Consistent appearance and sound of alerts" with the quote:

The visual appearance, sound, content, and other aspects of emergency alerts are guided by the Senior Officials Responsible for Emergency Management. This federal, provincial and territorial body works to harmonize and improve emergency practices across the country.

That section also includes a link to the National Public Alerting System's Common Look and Feel Guidance which has some more information in it:

From the 7.3 SOREM Public Alerting Layer section:

The SOREM Public Alerting Layer establishes parameters that are referred to by the CLF. The current version of the specification can be found in Appendix B.

From the APPENDIX B – SOREM Public Alerting Layer section:

The SOREM Public Alerting Layer establishes a number of CAP s which are used to support the public alerting community in Canada. In a separate document, SOREM maintains the “Broadcast Immediate Events List”, which works in conjunction with this layer. The current version of this list is available at https://alerts.pelmorex.com/highpriorityalerts/

That link goes to a page that doesn't exist, but by visiting https://alerts.pelmorex.com directly and looking around, it looks like it should point to: List of Event Codes for Broadcast Immediate Alerts (version 2.1) (last updated 2022-05-25).

This document contains a table titled "EVENT CODE DESCRIPTIONS FOR EMERGENCY PUBLIC ALERT BROADCAST IMMEDIATE EVENTS" that defines what codes map to what alert severity. For example, the reason why an AMBER alert is sent at the highest level is because of this entry:

Event Code: amber
CAP Code for Urgency: Immediate
CAP Code for Severity: Severe or Extreme
CAP Code for Certainty: Observed
Description: Amber Alert: Issued by police services when a child has been abducted and it is believed that his/her life is in grave danger. An Amber Alert provides the public with immediate and up-to-date information about a child abduction and solicits the public's help in the safe and swift return of the child.

Where the following definitions for the CAP codes are used:

  • "Immediate" - Responsive action SHOULD be taken immediately
  • "Extreme" - Extraordinary threat to life
  • "Severe" - Significant threat to life
  • "Observed" - Determined to have occurred or to be ongoing

This is where the "everything at the highest alert level" issue comes from. Aside from testMessage, literally every other event code on this list has the same Immediate + Severe/Extreme classification. This includes everything from silver ("a missing, wandering older adult who may be suffering from a documented cognitive disability or degenerative memory disorder") to meteor ("A natural object of extraterrestrial origin [...] hits the ground with the potential for catastrophic damage.")

Summary

The CRTC requires that Wireless Service Providers (WSPs) implement the alerts as defined by SOREM. SOREM's list of alerts defines an AMBER alert as requiring the same amount of urgency and disruption as a planet-killing meteor dropping from the sky.

This seems to be by design, but if not, SOREM is the organization that would need to fix it. As an aside, it seems like the CRTC wouldn't even have to get involved as the policy just defers to the "BI list developed by SOREM, as amended from time to time" which seems pretty loose. Maybe I'm missing something though.

I didn't find anything that specified any details about how wide the affected range for these alerts should or should not be. It seems like that's left up to the organization issuing the alert.


Diagnosing periodic high ping times on macOS

Background

Up until a few months ago, my home network setup was pretty terrible. The entire house ran off the WiFi provided by an OpenWRT-powered TP-Link WDR3600 from 2013 in the far corner of the house. So when I fired up a game on my laptop via Steam in-home streaming and got unplayable lag and stuttering, I just figured that the issue was the network and left it at that. After all, the gaming PC was connected via WiFi from the opposite corner of the house with no shortage of walls in between.

In November 2018, I finally got serious about the network upgrade I had been planning. I'll probably do a more in-depth post about this at some point, but to make a long story short: Most things are wired and what isn't is connected via a modern, centrally-located access point. Performance is noticeably better in general and amazingly better for high-bandwidth tasks like transferring files.

So when I excitedly fired up the same game on my laptop, I was pretty surprised that it was still unplayable. At this point I should clarify that the game was not something that requires a CRT and wired controller for super-low response times. So when I say unplayable, I mean legitimately unplayable. I'm talking periodic lag spikes that caused the video to drop a few seconds or more behind the inputs. Even turn-based games would have issues with the amount of lag.

Narrowing the scope

To make sure it wasn't an issue with the streaming software, the gaming PC, or the network, I used ping to test the latency to the gateway from a few hosts. All the computers except my laptop had normal ping times. Running it on my laptop looked like this:

$ ping pfsense.lan | tee >(grep -oP 'time=\K\S*' | spark)
PING pfsense.lan (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: icmp_seq=0 ttl=64 time=1.574 ms
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.996 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.568 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=92.153 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=39.079 ms
64 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=14.307 ms
64 bytes from 192.168.1.1: icmp_seq=6 ttl=64 time=1.570 ms
64 bytes from 192.168.1.1: icmp_seq=7 ttl=64 time=1.636 ms
64 bytes from 192.168.1.1: icmp_seq=8 ttl=64 time=1.698 ms
64 bytes from 192.168.1.1: icmp_seq=9 ttl=64 time=1.568 ms
64 bytes from 192.168.1.1: icmp_seq=10 ttl=64 time=1.550 ms
64 bytes from 192.168.1.1: icmp_seq=11 ttl=64 time=1.106 ms
64 bytes from 192.168.1.1: icmp_seq=12 ttl=64 time=1.832 ms
64 bytes from 192.168.1.1: icmp_seq=13 ttl=64 time=1.816 ms
64 bytes from 192.168.1.1: icmp_seq=14 ttl=64 time=36.598 ms
64 bytes from 192.168.1.1: icmp_seq=15 ttl=64 time=42.863 ms
64 bytes from 192.168.1.1: icmp_seq=16 ttl=64 time=20.747 ms
64 bytes from 192.168.1.1: icmp_seq=17 ttl=64 time=3.112 ms
64 bytes from 192.168.1.1: icmp_seq=18 ttl=64 time=1.695 ms
64 bytes from 192.168.1.1: icmp_seq=19 ttl=64 time=1.795 ms
64 bytes from 192.168.1.1: icmp_seq=20 ttl=64 time=1.504 ms
64 bytes from 192.168.1.1: icmp_seq=21 ttl=64 time=1.644 ms
64 bytes from 192.168.1.1: icmp_seq=22 ttl=64 time=1.732 ms
64 bytes from 192.168.1.1: icmp_seq=23 ttl=64 time=1.581 ms
64 bytes from 192.168.1.1: icmp_seq=24 ttl=64 time=1.794 ms
64 bytes from 192.168.1.1: icmp_seq=25 ttl=64 time=33.367 ms
64 bytes from 192.168.1.1: icmp_seq=26 ttl=64 time=124.282 ms
64 bytes from 192.168.1.1: icmp_seq=27 ttl=64 time=97.719 ms
64 bytes from 192.168.1.1: icmp_seq=28 ttl=64 time=76.501 ms
64 bytes from 192.168.1.1: icmp_seq=29 ttl=64 time=1.734 ms
64 bytes from 192.168.1.1: icmp_seq=30 ttl=64 time=1.622 ms
64 bytes from 192.168.1.1: icmp_seq=31 ttl=64 time=1.758 ms
64 bytes from 192.168.1.1: icmp_seq=32 ttl=64 time=1.578 ms
64 bytes from 192.168.1.1: icmp_seq=33 ttl=64 time=1.527 ms
64 bytes from 192.168.1.1: icmp_seq=34 ttl=64 time=1.017 ms
^C
 ▁▁▁▆▃▁▁▁▁▁▁▁▁▁▃▃▂▁▁▁▁▁▁▁▁▂█▆▅▁▁▁▁▁▁

(sweet graph at the end courtesy of spark)

Notice the periodic spikes. Clearly there's something going on with the network connection on my laptop. Connecting via a wired connection and running it again confirmed that it was only happening on WiFi.
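If you don't have spark handy, a quick pipeline can summarize the latencies instead. This is a sketch with sample values inlined for illustration; in practice you'd pipe the output of something like `ping -c 50 pfsense.lan` into it:

```shell
# Summarize ping latencies: report the average and the worst spike
printf 'time=1.5 ms\ntime=92.1 ms\ntime=1.6 ms\n' |
  grep -oE 'time=[0-9.]+' | cut -d= -f2 |
  awk '{ s += $1; if ($1 > m) m = $1 } END { printf "avg=%.1f max=%.1f\n", s/NR, m }'
# -> avg=31.7 max=92.1
```

A max far above the average is a good hint that you're looking at periodic spikes rather than uniformly bad latency.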

WiFi debugging

Now that I had narrowed the problem down to the WiFi connection, I could start trying to debug it.

After coming across this very helpful post explaining how, I enabled WiFi debug logging by running:

sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/A/Resources/airport en0 debug +AllUserland

(remember to turn this off afterwards; airport is pretty chatty with debug logs enabled)
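To turn the logging off again, my understanding is that the same flag can be passed with a `-` prefix instead of `+` (verify against airport's own help output):

```shell
# Disable the debug logging enabled above (note the leading "-")
sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/A/Resources/airport en0 debug -AllUserland
```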

I then watched the wifi log (sudo tail -f /var/log/wifi.log) as I ran the same ping command. Whenever I saw ping times spike, I also saw the following entries in the WiFi log:

Tue Nov 27 02:01:33.764 IPC: <airportd[68]> ADDED XPC CLIENT CONNECTION [QSyncthingTray (pid=14397, euid=501, egid=20)]
Tue Nov 27 02:01:33.765 Info: <airportd[68]> SCAN request received from pid 14397 (QSyncthingTray) with priority 0
Tue Nov 27 02:01:33.766 Scan: <airportd[68]> Cache-assisted scan request for QSyncthingTray on channel 1 does not require a live scan
Tue Nov 27 02:01:33.766 Scan: <airportd[68]> Cache-assisted scan request for QSyncthingTray on channel 2 does not require a live scan
Tue Nov 27 02:01:33.766 Scan: <airportd[68]> Cache-assisted scan request for QSyncthingTray on channel 3 does not require a live scan
Tue Nov 27 02:01:33.766 Scan: <airportd[68]> Cache-assisted scan request for QSyncthingTray on channel 4 does not require a live scan
Tue Nov 27 02:01:33.766 Scan: <airportd[68]> Cache-assisted scan request for QSyncthingTray on channel 5 does not require a live scan
Tue Nov 27 02:01:33.766 Scan: <airportd[68]> Cache-assisted scan request for QSyncthingTray on channel 6 does not require a live scan
Tue Nov 27 02:01:33.766 Scan: <airportd[68]> Cache-assisted scan request for QSyncthingTray does not require a live scan
Tue Nov 27 02:01:33.766 AutoJoin: <airportd[68]> Successful cache-assisted scan request for QSyncthingTray with channels {(
Tue Nov 27 02:01:33.766     <CWChannel: 0x7f9073514e70> [channelNumber=1(2GHz), channelWidth={20MHz}, active],
Tue Nov 27 02:01:33.766     <CWChannel: 0x7f90735507b0> [channelNumber=2(2GHz), channelWidth={20MHz}, active],
Tue Nov 27 02:01:33.766     <CWChannel: 0x7f9073526370> [channelNumber=3(2GHz), channelWidth={20MHz}, active],
Tue Nov 27 02:01:33.766     <CWChannel: 0x7f9073526420> [channelNumber=4(2GHz), channelWidth={20MHz}, active],
Tue Nov 27 02:01:33.766     <CWChannel: 0x7f907354ddb0> [channelNumber=5(2GHz), channelWidth={20MHz}, active],
Tue Nov 27 02:01:33.766     <CWChannel: 0x7f907354c6c0> [channelNumber=6(2GHz), channelWidth={20MHz}, active]
Tue Nov 27 02:01:33.766 )} took 0.0007 seconds, returned 5 results
< insert logs like above for all channels all the way up to 144 in batches of 6 >
Tue Nov 27 02:01:33.779 IPC: <airportd[68]> INVALIDATED XPC CLIENT CONNECTION [QSyncthingTray (pid=14397, euid=501, egid=20)]
< 10 seconds pass>
Tue Nov 27 02:01:43.847 IPC: <airportd[68]> ADDED XPC CLIENT CONNECTION [QSyncthingTray (pid=14397, euid=501, egid=20)]
< same scanning logs as above >
Tue Nov 27 02:01:46.741 IPC: <airportd[68]> INVALIDATED XPC CLIENT CONNECTION [QSyncthingTray (pid=14397, euid=501, egid=20)]
< ... and repeat >

This pretty clearly indicates that the QSyncthingTray application (a tray icon to monitor Syncthing) was periodically requesting WiFi scans which, in turn, caused the bursts of latency. After quitting the application and running ping again, the times were all <2ms, even after running for a few minutes. Also, when I started up the game again to test it, the lag was completely gone!

Digging deeper

Since QSyncthingTray is open source, I filed a bug report on the project and dug into the code to see if I could pinpoint the issue. After finding nothing out of the ordinary and doing a bunch of internet searching, it turns out that the problem isn't in QSyncthingTray at all, but in Qt, a cross-platform SDK that QSyncthingTray and many other programs use. Some classmates and I actually used it to make a game called ParticleStorm back in university.

There are open bug reports against this issue from 2013, 2014, and 2015. Additionally, there are loads of blog posts and per-program workarounds. It's a little disappointing that just running a network-enabled Qt application can seriously disrupt real-time applications on the same system. Granted, I have no knowledge of the complexities of this issue, but at the very least it seems the documentation could be clearer on the issue so developers stop getting surprised by it.

Anyway, the "fix" is to set QT_BEARER_POLL_TIMEOUT=-1 as an environment variable on your system. Honestly, I've just stopped using QSyncthingTray instead. Not that it isn't useful; I just realized that I didn't really need it. Syncthing works well enough that I don't find myself ever worrying about it. If a Qt application ever becomes an essential part of my setup, I guess I'll have to revisit this.
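For anyone who does need to keep a Qt application around, note that the variable has to be visible to GUI applications, not just to shells. On macOS, something like the following should work (a sketch, not a setup I've kept running; `launchctl setenv` only affects processes launched after it runs):

```shell
# Make the variable visible to GUI apps launched after this point
launchctl setenv QT_BEARER_POLL_TIMEOUT -1

# And for terminal-launched programs, e.g. in ~/.profile:
export QT_BEARER_POLL_TIMEOUT=-1
```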

Takeaways

  1. When macOS scans for networks (as it does when requested by an application, when you click the WiFi icon, or when you request your location), it causes high ping times and dropped packets. According to bug reports, scanning for WiFi networks on other OSes causes the same issues. This is something I didn't expect, but it's good to know going forward.

  2. Qt has a bug where instantiating a QNetworkConfigurationManager causes it to scan for WiFi networks every 10 seconds by default. Some applications set the QT_BEARER_POLL_TIMEOUT environment variable themselves to disable this behaviour, but the majority probably have no idea that it's even an issue until a user reports it.

So if you're seeing regular latency spikes, audit what programs you have running. Specifically look for programs built with Qt. Setting the QT_BEARER_POLL_TIMEOUT environment variable to -1 can help with these. Applications that use your location could also be the culprit. Who knew keeping your ping down to an acceptable level was such a minefield...


Proxied access to the Namecheap DNS API

Background

Both this site and my home server use HTTPS certificates provided by Let's Encrypt. Previously I was using the http-01 challenge method, but once wildcard certificates became available, I decided to switch to the dns-01 method for simplicity.

My domains are currently registered through Namecheap which provides an API for modifying DNS records. The only catch is that it can only be accessed via manually-whitelisted IP addresses. Because the server I'm trying to access the API from is assigned a dynamic IP, this restriction was a bit of an issue.

Back in November when I initially made the switch to the DNS-based challenge, I set it up the easy way: I manually added my current IP to the whitelist, added a TODO entry to fix it somehow, and set a reminder scheduled for the week before the cert would expire telling me to update the IP whitelist. Fast-forward to today when I was reminded to update my IP whitelist. Instead of continuing to kick the can down the road, I decided to actually fix the issue.

Setting up a temporary proxy

My home server is connected to the internet through a normal residential connection which is assigned a dynamic IP address. However, the server that I run this website on is configured with a static IP since it's a hosted VPS. By proxying all traffic to the Namecheap API through my VPS, I could add my VPS's static IP to the whitelist and not have to worry about my home IP changing all the time.

SSH is the perfect tool for this. The OpenSSH client supports dynamic port forwarding (a local SOCKS proxy) via the -D flag. By setting the HTTP[S]_PROXY environment variables to point at the forwarded port, programs that support those variables will transparently send their requests through the proxy.
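The core mechanism is just a few commands (hypothetical host and arbitrary port, shown here only to illustrate the idea; the script below wraps this up properly):

```shell
# Open a backgrounded SSH connection exposing a local SOCKS proxy on port 11111
# -D: dynamic (SOCKS) forwarding, -f: background after auth, -N: run no remote command
ssh -f -N -D 11111 user@example.com

# Point proxy-aware programs at it
export HTTP_PROXY="socks5://127.0.0.1:11111"
export HTTPS_PROXY="$HTTP_PROXY"
```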

After a bunch of research and testing, I came up with the following script to easily set up the temporary proxy:

#!/bin/sh
# Source this file to automatically setup and teardown an HTTP* proxy through cmetcalfe.ca
# Use the PROXY_PORT and PROXY_DEST environment variables to customize the proxy

PROXY_PORT="${PROXY_PORT:-11111}"
PROXY_DEST="${PROXY_DEST:-<username>@cmetcalfe.ca}"
PID="$$"

# Teardown the SSH connection when the script exits
trap 'ssh -q -S ".ctrl-socket-$PID" -O exit "$PROXY_DEST"' EXIT

# Set up an SSH tunnel and wait for the port to be forwarded before continuing
if ! ssh -o ExitOnForwardFailure=yes -M -S ".ctrl-socket-$PID" -f -N -D "$PROXY_PORT" "$PROXY_DEST"; then
    echo "Failed to open SSH tunnel, exiting"
    exit 1
fi

# Set environment variables to redirect HTTP* traffic through the proxy
export HTTP_PROXY="socks5://127.0.0.1:$PROXY_PORT"
export HTTPS_PROXY="$HTTP_PROXY"

This script is saved as ~/.config/tempproxy.rc and can be sourced to automatically set up a proxy session and have it be torn down when the script exits.
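As a usage sketch (api.ipify.org is just an example echo-my-IP service, not part of my setup):

```shell
#!/bin/sh
# Any script that sources tempproxy.rc runs with a proxied environment;
# the EXIT trap in tempproxy.rc tears the tunnel down when the script finishes.
. ~/.config/tempproxy.rc
curl -s https://api.ipify.org   # should print the VPS's IP, not the home IP
```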

You'll want to use key-based authentication with an unencrypted private key so that you don't need to type anything to initiate the SSH session. For this reason you'll probably want to create a limited user on the target system that can only really do port forwarding. There's a good post on this here.
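One way to limit that user (an assumption about a reasonable setup, not taken from the linked post) is the `restrict` option in authorized_keys, available since OpenSSH 7.2:

```shell
# ~/.ssh/authorized_keys for the proxy-only user on the VPS:
# "restrict" disables PTY allocation, forwarding, X11, etc., then
# "port-forwarding" re-enables just the capability the proxy needs.
restrict,port-forwarding ssh-ed25519 AAAA... proxy-only@laptop
```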

Talking to the API through the proxy

To allow programs that use the tempproxy.rc script to talk to the Namecheap API, the IP address of the VPS was added to the whitelist. Now that the proxying issue was taken care of, I just needed to wire up the actual certificate renewal process to use it.

The tool I'm using to talk to the Namecheap DNS API is lexicon. It can handle manipulating the DNS records of a ton of providers and integrates really nicely with my ACME client of choice, dehydrated. Also, because it's using the PyNamecheap Python library, which in turn uses requests under the hood, it will automatically use the HTTP*_PROXY environment variables when making requests.

The only tricky bit is that the base install of lexicon won't automatically pull in the packages required for accessing the Namecheap API. Likewise, requests won't install packages to support SOCKS proxies. To install all the required packages you'll need to run a command like:

pip install 'requests[socks]' 'dns-lexicon[namecheap]'

Since lexicon can use environment variables as configuration, I created another small source-able file at ~/.config/lexicon.rc:

#!/bin/sh
# Sets environment variables for accessing the Namecheap API

# Turn on API access and get an API token here:
# https://ap.www.namecheap.com/settings/tools/apiaccess/
export PROVIDER=namecheap
export LEXICON_NAMECHEAP_USERNAME=<username>
export LEXICON_NAMECHEAP_TOKEN=<api token>

With that in place, the existing automation script that I had previously set up when I switched to using the dns-01 challenge just needed some minor tweaks to source the two new files. I've provided the script below, along with some other useful ones that use the DNS API.

Scripts

do-letsencrypt.sh

This is the main script that handles all the domain renewals. It's called via cron on the first of every month.

Fun fact: The previous http-01 version of this script was a mess - it involved juggling nginx configurations with symlinks, manually opening up the .well-known/acme-challenge endpoint, and some other terrible hacks. This version is much nicer.

#!/bin/sh
# Generate/renew Let's Encrypt certificates using dehydrated and lexicon

set -e

# Set configuration variables and setup the proxy
source ~/.config/lexicon.rc
source ~/.config/tempproxy.rc

cd /etc/nginx/ssl

# Update the certs.
# Hook script copied verbatim from
# https://github.com/AnalogJ/lexicon/blob/master/examples/dehydrated.default.sh
if ! dehydrated --accept-terms --cron --challenge dns-01 --hook dehydrated.default.sh; then
    echo "Failed to renew certificates"
    exit 1
fi

# Reload the nginx config if it's currently running
if ! systemctl is-active nginx.service > /dev/null; then
    echo "nginx isn't running, not reloading it"
    exit 0
fi

if ! systemctl reload nginx.service; then
    systemctl status nginx.service
    echo "Failed to reload nginx config, check the error log"
    exit 1
fi

dns.sh

This one is really simple - it just augments the normal lexicon CLI interface with the proxy and configuration variables.

#!/bin/sh
# A simple wrapper around lexicon.
# Sets up the proxy and configuration, then passes all the arguments to the configured provider

source ~/.config/lexicon.rc
source ~/.config/tempproxy.rc
lexicon "$PROVIDER" "$@"
$ ./dns.sh list cmetcalfe.ca A
ID       TYPE NAME             CONTENT         TTL
-------- ---- ---------------- --------------- ----
xxxxxxxx A    @.cmetcalfe.ca   xxx.xxx.xxx.xxx 1800
xxxxxxxx A    www.cmetcalfe.ca xxx.xxx.xxx.xxx 1800

dns-update.sh

An updated version of my previous script that uses the API instead of the DynamicDNS service Namecheap provides. Called via cron every 30 minutes.

Update

A previous version of this script didn't try to delete the DNS entry before setting the new one, resulting in multiple IPs being set for the same A record.

This is because the update function of the Namecheap plugin for lexicon is implemented as a 2-step delete and add. When it tries to delete the old record, it looks for one with the same content as the old one. This means that if you update a record with a new IP, it doesn't find the old record to delete first, leaving you with all of your old IPs set as individual A records.

#!/bin/sh
# Check the server's current IP against the IP listed in its DNS entry.
# Set the current IP if they differ.

set -e

resolve() {
    line=$(drill "$1" @resolver1.opendns.com 2> /dev/null | sed '/;;.*$/d;/^\s*$/d' | grep "$1")
    echo "$line" | head -1 | cut -f5
}

dns=$(resolve <subdomain>.cmetcalfe.ca)
curr=$(resolve myip.opendns.com)
if [ "$dns" != "$curr" ]; then
    source ~/.config/lexicon.rc
    source ~/.config/tempproxy.rc
    if ! lexicon "$PROVIDER" delete cmetcalfe.ca A --name "<subdomain>.cmetcalfe.ca"; then
        echo "Failed to delete old DNS record, not setting the new one."
        exit 1
    fi
    if lexicon "$PROVIDER" update cmetcalfe.ca A --name "<subdomain>.cmetcalfe.ca" --content "$curr" --ttl=900; then
        echo "Server DNS record updated ($dns -> $curr)"
    else
        echo "Server DNS record update FAILED (tried $dns -> $curr)"
    fi
fi

© Carey Metcalfe. Built using Pelican. Theme is subtle by Carey Metcalfe. Based on svbhack by Giulio Fidente.