
Forwarding spam with Gmail

With multiple Gmail accounts, it's often easier to forward all emails to a single main account that you actually check. This can be accomplished by setting up a forwarding account in Gmail's settings.

However, something that isn't immediately obvious is that messages that are considered spam are never forwarded. While Gmail's spam detection is extremely good, it's not perfect. There have been times when emails that I actually wanted ended up in the spam folder and weren't forwarded to my main account.

To fix this, we want to tell Gmail not to flag anything as spam so that everything is forwarded along.

Note

This will not cause spam to appear in your inbox. It bypasses the spam filters so the email is forwarded properly, but the account it's forwarded to will also flag the email as spam. The end result is that the email will end up in the spam folder of your main account (where you can actually look at it) instead of not being forwarded from the original account at all.

We can do this with a filter that matches is:spam and applies the option "Never send it to Spam". Unfortunately, Gmail's interface makes this task much harder than it needs to be.

  1. Open Gmail and do a search for is:spam. Notice that Gmail autocorrects it to in:spam.
  2. Click the dropdown arrow on the right side of the search box to bring down the advanced options and click "Create filter with this search". Hit "OK" on the warning box that pops up. Notice that the filter has been autocorrected again to label:spam.
  3. In the URL you should see something like #create-filter/has=label%3Aspam. Change label to is in the URL and hit enter. It should modify the text in the box without changing anything else.
  4. Check the "Never send it to Spam" checkbox and hit "Create filter". You'll see the text get autocorrected to in:spam again, but if you check in Settings -> Filters and Blocked Addresses you should see the correct is:spam filter.

Bank of Canada currency conversion rates in the terminal

Preface

In this entry I'll use two command line tools (curl and jq) to request currency conversion rates from the Bank of Canada. Getting the same data through the normal web interface is obviously possible as well. However, doing it this way returns the data in a machine-readable form, allowing it to be customized or used as input to other programs.

The basic flow is to request the data using curl, then pipe it to jq for processing.

Note

This article will request the USD/CAD exchange rate from 2017-01-01 to 2017-01-10. It should be fairly obvious how to switch this for any other currencies or date ranges. See the Appendix for more information and caveats with the dates and the conversion codes.
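For the record, switching is just a matter of changing the series code and dates in the URL. A quick sketch using shell variables (FXEURCAD is the EUR/CAD series code from the Appendix; any other code or date range can be substituted):

SERIES="FXEURCAD"
START="2017-01-01"
END="2017-01-10"
curl -s "https://www.bankofcanada.ca/valet/observations/${SERIES}/json?start_date=${START}&end_date=${END}"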

We'll go from seeing the raw data the server returns to something that we can more easily use. To skip to the final result, click here.

Update

The URLs for the data were originally reverse-engineered from the website. They have since been turned into a supported API. See the API documentation for more details.

Let's go!

Hitting the server without any processing will give the following result:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10"
{
  "terms": {
    "url": "http://www.bankofcanada.ca/terms/"
  },
  "seriesDetail": {
    "FXUSDCAD": {"label": "USD/CAD", "description": "US dollar to Canadian dollar daily exchange rate"}
  },
  "observations": [
    {"d": "2017-01-02", "FXUSDCAD": {"e": -64}},
    {"d": "2017-01-03", "FXUSDCAD": {"v": 1.3435}},
    {"d": "2017-01-04", "FXUSDCAD": {"v": 1.3315}},
    {"d": "2017-01-05", "FXUSDCAD": {"v": 1.3244}},
    {"d": "2017-01-06", "FXUSDCAD": {"v": 1.3214}},
    {"d": "2017-01-09", "FXUSDCAD": {"v": 1.3240}},
    {"d": "2017-01-10", "FXUSDCAD": {"v": 1.3213}}
  ]
}

To get the information we want, we need to iterate over the elements in the observations list and map the date (d) to the conversion rate (FXUSDCAD.v). Since this will create a list of single-element dictionaries, we'll also need to "add" (merge) them together.

To do this with jq, it looks like this:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[] | {(.d) : .FXUSDCAD.v}] | add
'
{
  "2017-01-02": null,
  "2017-01-03": 1.3435,
  "2017-01-04": 1.3315,
  "2017-01-05": 1.3244,
  "2017-01-06": 1.3214,
  "2017-01-09": 1.324,
  "2017-01-10": 1.3213
}

That first null entry where there was no data is going to cause problems later, so let's just delete it using the del function:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[] | {(.d) : .FXUSDCAD.v}] | add | del(.[] | nulls)
'
{
  "2017-01-03": 1.3435,
  "2017-01-04": 1.3315,
  "2017-01-05": 1.3244,
  "2017-01-06": 1.3214,
  "2017-01-09": 1.324,
  "2017-01-10": 1.3213
}

This may be good enough for most purposes, but it would be useful to compute some other statistics from this data. For example, say we want to know the average exchange rate for the date range. Now that we have the data in an easy-to-use format, computing an average with jq is just a matter of passing the values to an add / length filter:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[] | {(.d) : .FXUSDCAD.v}] | add | del(.[] | nulls) | add / length
'
1.327683333333333
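As an aside, the same pipeline extends to other statistics. For example, jq's values builtin can drop the null entry up front, making the minimum and maximum just as easy to compute alongside the average (a sketch, not part of the flow above):

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[].FXUSDCAD.v | values] | {min: min, max: max, average: (add / length)}
'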

We can now construct an object that has both the average, as well as all the data. Putting it all together we get:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq '
    [.observations[] | {(.d) : .FXUSDCAD.v}] | add | del(.[] | nulls) |
    {
        "average": (add / length),
        "values": .
    }
'
{
  "average": 1.327683333333333,
  "values": {
    "2017-01-03": 1.3435,
    "2017-01-04": 1.3315,
    "2017-01-05": 1.3244,
    "2017-01-06": 1.3214,
    "2017-01-09": 1.324,
    "2017-01-10": 1.3213
  }
}
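And because the output is machine-readable, it slots straight into other programs. For instance, a sketch that captures just the average into a shell variable using jq's -r flag:

rate=$(curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2017-01-01&end_date=2017-01-10" |\
jq -r '[.observations[].FXUSDCAD.v | values] | add / length')
echo "Average USD/CAD rate: ${rate}"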

And that's it! Hopefully you found this useful, if not for its stated purpose of retrieving exchange rates, then at least for getting more familiar with jq. It's pretty much the best tool out there when it comes to manipulating JSON on the command line.

Appendix

Date ranges

  • The server will only ever return data from 2017-01-03 onwards. If the start date is before that, it doesn't return an error, it just doesn't return any data for those dates (a quick check is sketched below the list).
  • If the end date isn't specified, the server assumes the current date.
  • There doesn't seem to be any limit on the amount of data retrieved.
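As a quick (untested) check of the first point, requesting a much earlier start date and printing the first observation should still show 2017-01-03:

curl -s "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?start_date=2016-01-01&end_date=2017-01-10" |\
jq '.observations[0]'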

Currency codes

A list of codes can be easily scraped from the lookup page. The command and its result are below for reference.

curl -s "https://www.bankofcanada.ca/rates/exchange/daily-exchange-rates-lookup/" | grep -Eo '"FX......".*<'
"FXAUDCAD">Australian dollar<
"FXBRLCAD">Brazilian real<
"FXCNYCAD">Chinese renminbi<
"FXEURCAD">European euro<
"FXHKDCAD">Hong Kong dollar<
"FXINRCAD">Indian rupee<
"FXIDRCAD">Indonesian rupiah<
"FXJPYCAD">Japanese yen<
"FXMYRCAD">Malaysian ringgit<
"FXMXNCAD">Mexican peso<
"FXNZDCAD">New Zealand dollar<
"FXNOKCAD">Norwegian krone<
"FXPENCAD">Peruvian new sol<
"FXRUBCAD">Russian ruble<
"FXSARCAD">Saudi riyal<
"FXSGDCAD">Singapore dollar<
"FXZARCAD">South African rand<
"FXKRWCAD">South Korean won<
"FXSEKCAD">Swedish krona<
"FXCHFCAD">Swiss franc<
"FXTWDCAD">Taiwanese dollar<
"FXTHBCAD">Thai baht<
"FXTRYCAD">Turkish lira<
"FXGBPCAD">UK pound sterling<
"FXUSDCAD">US dollar<
"FXVNDCAD">Vietnamese dong<

Adding HTTPS support to this site

Yesterday I finally ticked something off my todo list that'd been on there for a while - I added proper HTTPS support to this site, powered by Let's Encrypt.

The Qualys SSL Labs SSL Server Test now gives this site an A+ rating. You can see all the certs (past and present) for it on crt.sh, an online certificate search tool.

For those interested in how things work:

Let's Encrypt is a "free, automated, and open" certificate authority. They provide certificates via an automatable process that are valid for 90 days.

Exactly how I got it all working can be seen in my website repo, where this entire site and its deployment scripts are kept, but here's the gist of it:

I decided to use dehydrated instead of the official Let's Encrypt client to generate the certificates. dehydrated is ~1000 lines of bash and only requires things like openssl, curl, and a few other programs that any UNIX system should already have. It takes a config file and a list of domains. Once those are in place running it from cron is easy.
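I won't reproduce my exact setup here (it's in the repo), but roughly speaking the two files look something like this. The paths, email, and domains are placeholders, and the variable names come from dehydrated's example config:

# config
WELLKNOWN="/var/www/dehydrated"
CONTACT_EMAIL="admin@example.com"

# domains.txt - one certificate per line, extra names become SANs
example.com www.example.com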

The ACME protocol that Let's Encrypt uses to verify domain ownership requires the web server to respond to certain requests. This check makes sure that only someone with control over the domain can generate a cert for it. To allow those requests to return the files generated by dehydrated, a location rule was added to the Nginx config.
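The exact rule is in the repo, but it's along these lines (assuming dehydrated writes its challenge files to /var/www/dehydrated):

# Serve ACME http-01 challenge files generated by dehydrated
location /.well-known/acme-challenge/ {
    alias /var/www/dehydrated/;
}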

Initially there were problems generating certs for the first time. Since the certs didn't exist yet, Nginx was failing to start (and therefore causing the domain validation to fail). The solution was to make two sets of configs: an HTTP set and an HTTPS set. During the renewal process, the HTTPS configs are tried, and if running Nginx fails, it falls back to the HTTP set. Switching between the HTTP and HTTPS configs is done by modifying a symlink.
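In shell terms, the fallback logic is roughly this (the config filenames here are hypothetical; the real logic lives in the deployment scripts):

# Try the HTTPS config first; fall back to HTTP-only if Nginx rejects it
ln -sf nginx-https.conf /etc/nginx/sites-enabled/site.conf
if ! nginx -t; then
    ln -sf nginx-http.conf /etc/nginx/sites-enabled/site.conf
fi
systemctl reload nginx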

Once everything is set up, this makes both generating a cert for the first time and renewing it as easy as deploying the site. For continual, automated renewals, a monthly cron job was added. Since the certs are valid for 90 days, this should be frequent enough that they never expire.
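The cron entry itself is a one-liner along these lines (the install path is a placeholder; dehydrated's --cron flag signs or renews certificates as needed):

# Run on the 1st of every month; reload Nginx to pick up new certs
0 0 1 * * /opt/dehydrated/dehydrated --cron && systemctl reload nginx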
