
Li-Ion to 3.3 V Buck-boost Converter

Looking at powering an ESP32 from Li-ion batteries, specifically the NCR18650B (3400 mAh), I tried to build the circuit in Random Nerd Tutorials’ Power ESP32/ESP8266 with Solar Panels. There an MCP1700-3302E LDO regulator (PDF warning) is suggested, but when using the NodeMCU ESP-32S it could not start up Wi-Fi reliably. Every now and then it would work, but my guess is the 250 mA limit was not quite enough to satisfy the current spike as the Wi-Fi turns on.

It could be that my particular board was deficient in some way, but I wanted more flexibility on the input voltage end of things anyway (e.g. boosting when the voltage is too low). When looking around, two chips stood out to me, the TPS63020 and TPS63060 (PDFs):

                            TPS63020                     TPS63060
Input voltage               1.8 V – 5.5 V                2.5 V – 12 V
Output voltage              1.2 V – 5.5 V                2.5 V – 8 V
Output current              2 A @ 3.3 V (VIN > 2.5 V)    2 A @ 5 V (VIN < 10 V) in buck mode
                                                         1.3 A @ 5 V (VIN > 4 V) in boost mode
Quiescent current           25 μA                        < 30 μA
Operating temperature (°C)  -40 to 85                    -40 to 85

Boards carrying these chips don’t seem to be sold particularly commonly, so I ordered a couple of whatever I could find: two boards with the TPS63020 on them (this one, for ~$13.37 CAD, and this one, for ~$7.28 CAD) and one board with the TPS63060 on it (this one, for ~$5.11 CAD).

All of them are slightly more expensive than I was hoping, but the specs are much more in line with what I wanted compared to the LDO regulator. I’d love to find a TPS63021 (the fixed 3.3 V output chip) to play with, but no luck so far.

Visualizing 3D printer Z axis offset

This is the z-height calibration model from the Prusa forum (source below). While the number is somewhat arbitrary (the values will mean nothing for a different printer), it represents the distance from the PINDA probe to the print bed, where a higher number means the nozzle sits closer to the bed. These images span from “not even making a layer” to “too close to extrude”. This is printed in ~3 year old red M3D 3D Ink PLA. I was having trouble with bed adhesion, so I figured it was worth it to go all out and give this a shot. All 4 prints are identical in everything except the Z offset. Source: https://shop.prusa3d.com/forum/assembly-and-first-prints-troubleshooting-f62/life-adjust-z-my-way-t2981.html

1 - Up4V1Rh

2 - iCFrF3h

A bit of an angle shot to hopefully show off some of the texture differences. You can kind of see that at 775 it’s starting to look a bit off, and at 925 it’s similarly starting to look off.

 

Dell 9550 battery replacement

I was a little unhappy with how my Dell 9550 had aged. It’s only 2.5 years old, but it was really seeming rough. The trackpad was finicky and had a hard time registering right clicks. Additionally, it was getting super hot, having trouble sleeping, and the battery life was almost nonexistent. Dell had shipped me a new battery a few months earlier, but I never got around to swapping it out.

I sat down to finally do it and looked for the instructions. Turns out the trackpad issues were exactly why Dell sent out the new battery – the battery swells up and presses on the underside of the trackpad. I opened up the back of the laptop and the heat issues immediately became clear…

20180804_124712.jpg

First impression – it’s super gross. Have you noticed the trouble spot though?

20180804_124712 - Copy.jpg

That’s where the air is supposed to go in…

computer-filth.jpg

Pulled out as much as I could get, pretty damn gross…

Turns out that dust clogging the fans causes overheating, which causes performance problems and does terrible things to battery life. No idea how much of the difference was from replacing the battery and how much was from unclogging the fans, but it’s night and day. I haven’t heard the fans come on since, and the battery life has gone from ~45 minutes to >6 hours. Worthwhile to not be lazy. Thanks Dell, it was nice of you to send out the battery unsolicited!

UFW, OpenVPN, forwarding traffic and not breaking everything

I’ve previously written about using OpenVPN to escape Xplornet’s double NAT. Every now and then I’ll set up a new server (following the steps there) and inevitably run into some firewall configuration problem. I’ve really never taken the time to understand how to use iptables. I understand that they’re theoretically simple, but amazingly I always have a hard time with them. To that end, I’ve used ufw to try and help.

The number one piece of advice for securing anything connected to the internet is to reduce the attack surface. Great:

sudo ufw default deny incoming
sudo ufw allow ssh

and now nothing works. Attack surface minimized!

Before going too far, I use the nuclear option on new or new-ish servers to ensure I know what I’m dealing with (NOTE: this leaves your server WIDE open, don’t stop here!):

# Reset ufw and disable it
sudo ufw reset

# Flush all iptables rules
sudo iptables -F
sudo iptables -X
sudo iptables -t nat -F
sudo iptables -t nat -X
sudo iptables -t mangle -F
sudo iptables -t mangle -X
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

This leaves me with:

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

$ sudo ufw status verbose
Status: inactive

Awesome. Clean slate! Starting from the beginning again:

$ sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)
$ sudo ufw allow ssh
Rules updated
$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip
To       Action     From
--       ------     ----
22/tcp   ALLOW IN   Anywhere

For the whole OpenVPN set up to work, the VPN client needs to actually be able to connect to the server. We’ll need to allow traffic on 1194 (or whatever port you’ve configured OpenVPN to use).

$ sudo ufw allow 1194
Rule added
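If OpenVPN is using UDP (its default) and your VPN client always connects from the same public address, you can narrow this rule further. A hedged sketch – 203.0.113.50 is a placeholder for your client’s real IP, not something from my setup:

# Replace the blanket rule with a source- and protocol-specific one
sudo ufw delete allow 1194
sudo ufw allow proto udp from 203.0.113.50 to any port 1194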

You’ll also need to allow traffic to whatever port it is you’re forwarding. For example, if I want port 3000 to be what I’m exposing to the public:

$ sudo ufw allow 3000
Rule added

Leaving us with:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To       Action     From
--       ------     ----
22/tcp   ALLOW IN   Anywhere
1194     ALLOW IN   Anywhere
3000     ALLOW IN   Anywhere

That’s about it for the more intuitive parts. The server is relatively locked down, although if you are using a fixed VPN client it may be worthwhile to white-list that single address (as in the sketch above). To allow the forwarding that the OpenVPN set up relies on, we’ll need to change the ufw default forward policy. Edit /etc/default/ufw and change the value of DEFAULT_FORWARD_POLICY from DROP to ACCEPT:

$ sudo nano /etc/default/ufw
...

# Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
# if you change this you will most likely want to adjust your rules
DEFAULT_FORWARD_POLICY="ACCEPT"

Then disable and re-enable ufw to update it:

$ sudo ufw disable && sudo ufw enable

Finally, adding the iptables rules used in the previous post (I’m sure there’s a way to do this with ufw, I just don’t know it):

$ sudo iptables -t nat -A PREROUTING -d 183.214.158.198 -p tcp --dport 3000 -j DNAT --to-dest 10.8.0.2:3000
$ sudo iptables -t nat -A POSTROUTING -d 10.8.0.2 -p tcp --dport 3000 -j SNAT --to-source 10.8.0.1
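To double check that both NAT rules actually made it in (and to get rule numbers in case you ever need to delete one), you can list the nat table:

$ sudo iptables -t nat -L PREROUTING -n --line-numbers
$ sudo iptables -t nat -L POSTROUTING -n --line-numbers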

Et voilà! A relatively locked down server that plays nicely with OpenVPN and forwarding traffic.

Replacing middle baffle support in Osburn 1600

We have an old Osburn 1600 freestanding stove (like these). Our middle baffle support rotted out. That’d be this piece in the stove (this isn’t exactly the model we have, but close enough):

StoveDiagram

Or how it looked in real life:

20161202_171451(1).jpg

It was holding the fire bricks up but it seemed like it wouldn’t be doing so for long. A couple money shots:

I contacted sbi-international.com and they helped me ensure I had the exact part number. $56.16 (after taxes) and a couple days later the piece showed up. Shockingly heavy, it’s just two pieces of steel and 6 welds. If I were more ambitious, I would have tried to weld it myself, but I already have too many projects on the go.

20171212_204614

Replacing it was actually really straightforward and easy. Nothing had to be disassembled; all the pieces are simply leaning on one another. There are 3 fire bricks on each side. There’s a bunch of space above the bricks (the smoke chamber), so pushing up the center brick is about as easy as can be. After that, the side bricks are trivial.

The remaining 5 bricks follow quickly, then the old middle baffle support basically fell out. Angle the new middle baffle support, put the bricks back in, and it’s all done.

20171214_233632.jpg

Took about 15 minutes and the fireplace is good to go!

Xplornet double NAT: VPN edition

Previously, I wrote about using a reverse SSH tunnel to escape a double NAT (specifically, the one provided by Xplornet). Without looking into why (maybe poor, intermittent connection and particularly awful uplink), my previous solution was not stable. Even with autossh, the connection kept dropping and not picking back up. I’m exclusively accessing this remotely, so when I notice the service being down I’m in pretty much the worst position to fix it.

Grab a public server with a static IP – for example a $5/month Linode or Droplet. I’ve seen reference to cheaper international products, but I have no experience with them.

Linode1024

If you’ve picked Linode:

  • deploy an Ubuntu image
  • boot it up
  • SSH to the machine
  • do regular server stuff – make sure it’s up to date, generally read over Digital Ocean’s guide here for inspiration

Set up OpenVPN server

On the public computer (OpenVPN server/cloud instance):

The first time I did this, I set up OpenVPN myself. It’s not awful, there are some pretty comprehensive guides (like this one), but it definitely sucks enough to look for an alternative. Googling around shows two compelling public scripts – Nyr’s openvpn-install and Angristan’s version based off Nyr’s. Looking over the two, I ended up picking Angristan’s version without all that much consideration.

SSH to the machine and execute the script on your public server to set up the certificates and keys for your client. The defaults for the script all seem sensible – you don’t have to feel bad if you just mash enter until the name prompt comes up, then give your client a reasonable name.

$ wget https://raw.githubusercontent.com/Angristan/OpenVPN-install/master/openvpn-install.sh
$ chmod +x openvpn-install.sh
$ ./openvpn-install.sh

You should notice at the end of the script execution a line that looks something like this:

...
Finished!

Your client config is available at /root/unifi-video-server.ovpn
If you want to add more clients, you simply need to run this script another time!

Take note of the location of your .ovpn file, as you’ll need it for the next step.

Set up OpenVPN client

On the private computer (machine that’s behind the double NAT):

On your client machine, get the OVPN configuration file that was generated from the previous step. scp is likely the easiest way to do this. From the client machine, you can retrieve the file like:

scp {server user}@{server host}:{remote path to ovpn} {local path}

For example:

$ scp root@37.48.80.202:/root/unifi-video-server.ovpn .

This will copy the file to the current directory on the machine. An extremely quick sanity check to ensure you can connect:

sudo openvpn unifi-video-server.ovpn

You should see:

Initialization Sequence Completed

once you do, you can ctrl + c your way out. If this wasn’t successful… something has gone wrong and you should fix it.
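Another quick check, if you want one, is that the client actually picked up a VPN address. With the defaults the tunnel interface is usually called tun0 and gets a 10.8.0.x address, though neither is guaranteed – run this in a second terminal while openvpn is still connected:

ip addr show tun0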

To make sure your client connects on start up:

  • rename your .ovpn file to be a .conf file
  • move the .conf file to /etc/openvpn
  • edit /etc/default/openvpn to ensure AUTOSTART is configured to start your connection (a concrete sketch follows)
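Concretely, reusing the unifi-video-server.ovpn example from above, that looks something like the following sketch (it assumes the Debian/Ubuntu packaging, where /etc/default/openvpn controls which configs in /etc/openvpn get started):

sudo mv unifi-video-server.ovpn /etc/openvpn/unifi-video-server.conf
sudo nano /etc/default/openvpn    # set AUTOSTART="all" or AUTOSTART="unifi-video-server"
sudo systemctl restart openvpn    # on some releases you may instead need: sudo systemctl start openvpn@unifi-video-server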

At this stage, you have an OpenVPN server set up and an OpenVPN client that automatically connects to the server. All that’s left is to do the internet part.

Set up server traffic forwarding to client

On the public computer (OpenVPN server/cloud instance):

What we want now is to forward traffic that hits a particular port on the public server to the private computer. Not only that, but you want the private computer to think the traffic is coming from the public server, so it doesn’t respond directly to whoever sent the internet request.

First things first, toggle the server to allow forwarding traffic (if you don’t do this, you’ll end up insanely frustrated and convinced iptables is the devil):

sysctl -w net.ipv4.ip_forward=1
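Note that sysctl -w only lasts until the next reboot. To make forwarding survive a restart, persist the setting too – a sketch:

# Uncomment or add net.ipv4.ip_forward=1 in /etc/sysctl.conf, then reload it:
sudo sysctl -p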

We need two pieces of information:

  • the public WAN (internet) IP address of the server
  • the virtual address of the OpenVPN client

Finding the public address can be done with:

$ curl ipinfo.io/ip
37.48.80.202

The virtual address of the OpenVPN client can be found in the OpenVPN status log while the client is connected (see above for how to set up the connection for now). The log seems to be in either /etc/openvpn/openvpn-status.log or /etc/openvpn/openvpn.log:

$ cat /etc/openvpn/openvpn.log
OpenVPN CLIENT LIST
Updated,Sun Nov 5 01:37:33 2017
Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
unifi-video-server,37.48.80.202:49014,39837,52165,Sun Nov 5 01:02:05 2017
ROUTING TABLE
Virtual Address,Common Name,Real Address,Last Ref
10.8.0.2,unifi-video-server,37.48.80.202:49014,Sun Nov 5 01:36:54 2017
GLOBAL STATS
Max bcast/mcast queue length,1
END

Now we’ll need a destination NAT (DNAT) rule and a source NAT (SNAT) rule for every port that is going to be forwarded. They’ll look something like this:

iptables -t nat -A PREROUTING -d {server WAN IP} -p tcp --dport {port} -j DNAT --to-dest {client virtual address}:{port}
iptables -t nat -A POSTROUTING -d {client virtual address} -p tcp --dport {port} -j SNAT --to-source {server virtual address}

Practically speaking, with the following:

  • public server whose Internet accessible IP address is 37.48.80.202
  • public server whose OpenVPN virtual address is 10.8.0.1
  • private computer whose OpenVPN virtual address is 10.8.0.2
  • forwarding port 7080 on the public server to port 7080 on the private computer

It’d look something like this:

iptables -t nat -A PREROUTING -d 37.48.80.202 -p tcp --dport 7080 -j DNAT --to-dest 10.8.0.2:7080
iptables -t nat -A POSTROUTING -d 10.8.0.2 -p tcp --dport 7080 -j SNAT --to-source 10.8.0.1

Now the only thing left is to make sure the routing rules persist across reboots.

$ sudo apt install iptables-persistent
$ sudo netfilter-persistent save
$ sudo netfilter-persistent reload
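If you’re curious what actually got saved, on Ubuntu netfilter-persistent writes the IPv4 rules out to /etc/iptables/rules.v4, so you can eyeball them:

$ sudo cat /etc/iptables/rules.v4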

And that’s it. In my experience this seems to be both a more robust solution to the double NAT problem, and uses tools in a more conventional way. I visited 37.48.80.202:7080, and (subject to the awful uplink speed from Xplornet), my page loaded!

Git: Determine which branches have been merged into any of a set of branches

Here’s my implementation (note that I’m neither a git expert nor a shell scripting expert):

1. Determine the set of branches that, when another branch has been merged into it, make up the modified meaning of a branch having been merged
2. Determine a pattern that narrows the list of all branches to only the branches in the previous set. For me, it was origin/release
3. Do everything else:

git branch --remote --list "origin/release/*" --format="%(objectname)" | xargs -n1 -I {} git branch --remote --merged {}
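Two tweaks of mine, not part of the original one-liner: the quotes around origin/release/* keep the shell from expanding the glob before git sees it, and since each release branch produces its own list, piping the combined output through sort -u collapses the duplicates:

git branch --remote --list "origin/release/*" --format="%(objectname)" | xargs -n1 -I {} git branch --remote --merged {} | sort -u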

 


What use is this?

Git has functionality to determine which branches are already merged into a specified branch (see git branch documentation and the git branch --merged flag in particular). This works well if you’re looking at a single branch at a time. The product I work on during the day has many developers working on multiple different releases at any one time – usually ~5 versions of the product are deployed and covered by service level agreements that ensure they’re continually supported. This is the reality for a great many enterprise applications deployed on customer infrastructure – continuous deployment just isn’t a thing without massive investment from all involved.

I found that developers were not good at cleaning up feature branches after merging them into their respective release streams. As a first step, I wanted to understand how many branches were actually merged, where “merged” is defined as “merged into any of the various release branches”. I suspect git has this functionality somewhere, but I wasn’t able to find it.

How to run TypeScript in the browser

Short answer

You can’t, that’s not a thing (at least so far).

Longer answer:

By building an overly complicated front-end tool chain! Seriously, it’s crazy how much this is not out-of-box. Preface, as always, is that I don’t really know what I’m doing, so I certainly wouldn’t recommend this for any real projects – I’m just using it for experimentation.

Tools needed

  • NodeJS
    • JavaScript run-time environment
    • Needed to actually run JavaScript and all the tooling
  • TypeScript
    • Typed language that compiles to JavaScript, comes with a compiler
    • This is what we want to write!
  • Babel
    • JavaScript to JavaScript compiler with ability to polyfill new features into older versions
    • Needed to convert the version of JavaScript we’re writing to a version browsers can execute
  • Webpack
    • Asset bundler – decouples your development project structure from the deliverable
    • Not strictly needed, but extremely useful for any “real” project

Steps

It’ll look something like this when all done:

TypeScriptProcess(1)

  1. Write code in TypeScript
  2. Use TypeScript compiler to compile TypeScript into a recent version of JavaScript, without providing backwards compatibility or browser polyfilling
  3. Use Babel compiler to turn recent version of JavaScript, which browsers can’t natively execute, into a version browsers can execute
  4. Use Webpack to grab your assortment of JavaScript files, organized however you want for development, and create a more easily deliverable “bundle” of everything

From the beginning, that means:

    1. Install NodeJS (use the latest version unless you have reason to do otherwise)
    2. Create your project
$ yarn init
yarn init v0.27.5
question name (typescript-front-end-seed): 
question version (1.0.0): 
question description: Seed project for TypeScript in the browser
question entry point (index.js): 
question repository url (https://github.com/tobymurray/typescript-front-end-seed.git): 
question author (Toby Murray <murray.toby+github@gmail.com>): 
question license (MIT): 
success Saved package.json
Done in 34.38s.
    3. Add all the dependencies we’ll need – TypeScript, Babel (note Babel by itself doesn’t really do anything, you need to include a plugin), and Webpack
$ yarn add -D typescript babel-cli babel-preset-env webpack
    4. Create whatever project structure you want. I’ll do something like src/ for TypeScript code, and public/ for static files (e.g. HTML).
$ mkdir src public
$ touch public/index.html src/index.ts
    5. Create the configuration files you’ll need for all the tools – tsconfig.json for TypeScript, .babelrc for Babel and webpack.config.js for Webpack
$ touch tsconfig.json .babelrc webpack.config.js

Tool configuration

Now comes either the interesting part or the awful part, depending on your perspective – configuring all the tools to do what we want! To keep things clear, we’ll place the output of the TypeScript compiler into a build-tsc folder, then we’ll feed that as input into Babel. The output of Babel will go into a build-babel folder. We’ll then use Webpack to consume the contents of the build-babel folder and put the result in a dist folder (this is what we’d actually serve up to a client browser).

TypeScript

Keeping this as simple as possible (there are plenty of options to play with), the two big decisions are what to use as the target and module version to use. Fortunately, we don’t really have to care too much, it just has to be consumable by Babel. To get all the features possible (mmm, delicious features), we can target e.g. ES2017, and use commonjs.

{
  "compilerOptions": {
    "target": "es2017",      /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', or 'ESNEXT'. */
    "module": "commonjs",    /* Specify module code generation: 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
    "rootDir": "./src",      /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
    "outDir": "./build-tsc", /* Redirect output structure to the directory. */
  }
}

Babel

Again, doing as little as possible, we’ll tell Babel to do whatever it needs to do to target apparently 95% of users’ browsers. For some reason, Babel does not support setting the output directory in the configuration file (see options here); it has to be passed as an argument to the invocation of Babel.

{
  "presets": [
    ["env", {
      "targets": {
        "browsers": ["last 2 versions", "safari >= 7"]
      }
    }]
  ]
}

Webpack

Likewise, to start, Webpack doesn’t have to be that complicated. We’ll include source maps here; don’t feel obliged to do so though.

const path = require('path');

module.exports = {
  devtool: "source-map",
  entry: './build-babel/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  }
};

package.json

To avoid having to remember anything, a few scripts in package.json can be useful. Breaking them out so it’s clear what step is doing what, it could look like this:

"scripts": {
  "clean": "yarn run clean-build-steps && rm -rf dist",
  "tsc": "./node_modules/.bin/tsc",
  "babel": "./node_modules/.bin/babel build-tsc --out-dir build-babel --source-maps",
  "webpack": "webpack && cp public/* dist",
  "clean-build-steps": "rm -rf build-tsc build-babel",
  "build": "yarn run clean && yarn run tsc && yarn run babel && yarn run webpack && yarn run clean-build-steps"
}

Build

Running yarn build (after the initial install) will:

  1. Clean anything from previous executions of the script
    1. This includes any leftover build artifacts, as well as the dist directory
  2. Use the TypeScript compiler to take everything from the src directory, transpile it to ES2017 JavaScript, and output it into the build-tsc directory
  3. Use Babel to convert everything in the build-tsc directory from ES2017 to ES2015 and output it into build-babel
  4. Use Webpack:
    1. Look in the build-babel folder
    2. Find index.js
    3. Parse index.js as an entrypoint, and resolve dependencies
    4. Add everything needed into one big bundle.js
  5. Create the “deployable” directory
    1. Copy the static HTML into the dist directory
    2. Copy the bundle.js into the dist directory

Serve

With something like http-server and serving the dist directory, we can see the product of our work!

$ http-server dist
Starting up http-server, serving dist
Available on:
  http://127.0.0.1:8080
  http://10.0.2.15:8080
  http://172.17.0.1:8080
Hit CTRL-C to stop the server

See the GitHub repository here and the deployed example here.

Automatically move downloaded torrents to remote machine

Setting up users for file transfer

The Transmission installation creates a debian-transmission user and group to run the daemon. It’s done this way to limit the risks if someone gains access to the user (through a Transmission bug, for example). This means the debian-transmission user is going to be the one executing the post-download script. The only way I’m aware of for transferring files to another machine while maintaining the restricted nature of the user is to create a similarly minimally privileged user on the remote system as the recipient of the files.

Assuming you’re using debian-transmission and you’ve created a corresponding user on the other machine – we’ll call them remote-user – you’ll want to set up an SSH key pair with the remote machine. For me, that was 192.168.1.20:

$ sudo mkdir /var/lib/transmission-daemon/.ssh
$ sudo ssh-keygen -f /var/lib/transmission-daemon/.ssh/id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/transmission-daemon/.ssh/id_rsa.
Your public key has been saved in /var/lib/transmission-daemon/.ssh/id_rsa.pub.
...
$ ssh-copy-id -i /var/lib/transmission-daemon/.ssh/id_rsa remote-user@192.168.1.20
...

Now, you need to do a little dance to get the known_hosts file populated. I don’t know of a better way to do this, but here’s what I did:

$ sudo su
# ssh-keyscan 192.168.1.20 >>/var/lib/transmission-daemon/.ssh/known_hosts
...
# exit

Then change the permissions so that debian-transmission owns everything.

$ sudo chown -R debian-transmission:debian-transmission /var/lib/transmission-daemon/

Post-torrent-download script

Create a script, and put it anywhere you’d like. I put mine in /usr/local/bin/after-torrent-downloaded.sh

$ sudo touch /usr/local/bin/after-torrent-downloaded.sh
$ sudo chown debian-transmission:debian-transmission /usr/local/bin/after-torrent-downloaded.sh
$ sudo chmod +x /usr/local/bin/after-torrent-downloaded.sh

For our purposes, there are two important environment variables Transmission exposes (see https://trac.transmissionbt.com/wiki/Scripts): TR_TORRENT_DIR, the absolute directory path, and TR_TORRENT_NAME, the torrent’s name. With all this done, the script is completely trivial. This is mine:

#!/bin/sh

# Copy the finished torrent to the remote machine
USERNAME=remote-user
HOST=192.168.1.20
TARGET_DIRECTORY=/home/remote-user/files

scp -r "$TR_TORRENT_DIR/$TR_TORRENT_NAME" $USERNAME@$HOST:"$TARGET_DIRECTORY"

Note: This relies on the target directory (/home/remote-user/files) already existing – if it doesn’t, make it.
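Assuming the same remote-user and 192.168.1.20 as above, creating that directory and checking that the debian-transmission user can actually log in non-interactively might look like this (a sketch, not something Transmission itself requires):

$ ssh remote-user@192.168.1.20 'mkdir -p /home/remote-user/files'
$ sudo -u debian-transmission ssh remote-user@192.168.1.20 'echo key-based login works'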

Transmission configuration

Note: The Transmission client should be closed before making changes, otherwise your settings will be reverted to their previous state.

First things first, find out where the configuration file you’re going to be changing is; see the transmission wiki. For me it was in the /var/lib/transmission-daemon/.config folder. In terms of changes to be made, there’s another wiki page. settings.json is the one we need, and there are only two values we need to worry about.

$ sudo nano /var/lib/transmission-daemon/.config/transmission-daemon/settings.json

Change "script-torrent-done-enabled": false, to "script-torrent-done-enabled": true,

Change "script-torrent-done-filename": "" to "script-torrent-done-filename": "/usr/local/bin/after-torrent-downloaded.sh" or whatever the path is to your script.

Save settings.json and make Transmission respect your changes with:

$ sudo killall -HUP transmission-daemon

That’s all there is to it!

Try downloading a torrent, and when it’s completed take a look at the Transmission logs:

$ sudo journalctl -u transmission-daemon.service

Every time a torrent finishes, it should be copied to the configured remote server.

Email with Gmail, NodeJS, and OAuth2

If you look around for examples of how to send an email via Gmail with NodeJS, they generally end up mentioning you should flip the toggle to Allow less secure apps:

Screenshot of Gmail Less secure apps setting page

This doesn’t seem like a good idea – I mean it SAYS “less secure”. I looked around, and while Google has tons of documentation, I found it a bit overwhelming. As promised, the NodeJS quickstart is a great place to start. It shows how to set up a client to authenticate with Google in the “more secure” fashion. I’ll go through that quickstart here, with a couple tweaks to send email.

First things first, install the necessary dependencies:

yarn add google-auth-library googleapis js-base64

Then steal most of the quickstart.js, swapping out enough to send an email. Note that this is my first time ever interacting with the Gmail API, so while this worked to send an email for me, no guarantees…

Pull in all the dependencies:

const fs = require('fs');
const readline = require('readline');
const google = require('googleapis');
const googleAuth = require('google-auth-library');
const Base64 = require('js-base64').Base64;

Choose the appropriate Auth Scopes for what you’re trying to accomplish:

const SCOPES = ['https://mail.google.com/',
  'https://www.googleapis.com/auth/gmail.modify',
  'https://www.googleapis.com/auth/gmail.compose',
  'https://www.googleapis.com/auth/gmail.send'
];

Define where you’re going to store the auth token once you get it:

const TOKEN_DIR = (process.env.HOME || process.env.HOMEPATH ||
  process.env.USERPROFILE) + '/.credentials/';
const TOKEN_PATH = TOKEN_DIR + 'gmail-nodejs-quickstart.json';

First, we’ll want to read the client secret that was created in the manual set up phase.

/**
 * Read the contents of the client secret JSON file
 * 
 * @param {String} filename - name of the file containing the client secrets
 */
function readClientSecret(filename) {
  return new Promise((resolve, reject) => {
    fs.readFile(filename, (err, content) => {
      if (err) {
        return reject('Error loading client secret from ' + filename +
          ' due to ' + err);
      }
      return resolve(content);
    });
  });
}

Then after parsing that JSON file, we’ll want to build Google’s OAuth2 client, as they’re nice and provide one for us.

/**
 * Create an OAuth2 client with the given credentials
 *
 * @param {Object} credentials The authorization client credentials.
 */
function authorize(credentials) {
  let clientSecret = credentials.installed.client_secret;
  let clientId = credentials.installed.client_id;
  let redirectUrl = credentials.installed.redirect_uris[0];
  let auth = new googleAuth();
  let oauth2Client = new auth.OAuth2(clientId, clientSecret, redirectUrl);

  return new Promise((resolve, reject) => {
    // Try reading the existing token
    fs.readFile(TOKEN_PATH, function (err, token) {
      if (err) {
        // If there isn't an existing token, get a new one
        resolve(getNewToken(oauth2Client));
      } else {
        oauth2Client.credentials = JSON.parse(token);
        resolve(oauth2Client);
      }
    });
  });
}

If this is the first time executing the program, or you’ve deleted the cached token, you’ll need to get a new one.

/**
 * Get and store new token after prompting for user authorization, then return
 * authorized OAuth2 client.
 *
 * @param {google.auth.OAuth2} oauth2Client The OAuth2 client to get token for.
 */
function getNewToken(oauth2Client) {
  let authUrl = oauth2Client.generateAuthUrl({
    access_type: 'offline',
    scope: SCOPES
  });

  console.log('Authorize this app by visiting this url: ', authUrl);

  let readlineInterface = readline.createInterface({
    input: process.stdin,
    output: process.stdout
  });

  return new Promise((resolve, reject) => {
    readlineInterface.question('Enter the code from that page here: ',
      (code) => {
        readlineInterface.close();
        oauth2Client.getToken(code, (err, token) => {
          if (err) {
            return reject('Error while trying to retrieve access token: ' + err);
          }

          oauth2Client.credentials = token;
          storeToken(token);
          return resolve(oauth2Client);
        });
      });
  });
}

To avoid having to do this on every call, it makes sense to write it out to the disk.

/**
 * Store token to disk to be used in later program executions.
 *
 * @param {Object} token The token to store to disk.
 */
function storeToken(token) {
  try {
    fs.mkdirSync(TOKEN_DIR);
  } catch (err) {
    if (err.code != 'EEXIST') {
      throw err;
    }
  }
  fs.writeFile(TOKEN_PATH, JSON.stringify(token), (err) => {
    if (err) throw err;
    console.log('Token stored to ' + TOKEN_PATH);
  });
}

At this point, our OAuth2 client is authenticated and ready to roll! If we’ve set up the Auth Scopes properly, our client should also be authorized to do whatever we want it to do. There are a handful of libraries that make building emails easier, but for simplicity’s sake we’ll just hand roll an email string.

/**
 * Build an email as an RFC 5322 formatted, Base64 encoded string
 * 
 * @param {String} to email address of the receiver
 * @param {String} from email address of the sender
 * @param {String} subject email subject
 * @param {String} message body of the email message
 */
function createEmail(to, from, subject, message) {
  let email = ["Content-Type: text/plain; charset=\"UTF-8\"\n",
    "MIME-Version: 1.0\n",
    "Content-Transfer-Encoding: 7bit\n",
    "to: ", to, "\n",
    "from: ", from, "\n",
    "subject: ", subject, "\n\n",
    message
  ].join('');

  return Base64.encodeURI(email);
}

Then the actual magic! Using our authenticated client and our formatted email to send the email. I’m not positive on this part, as I didn’t find a specific example that did it exactly as I was expecting (I also didn’t look too hard…)

/**
 * Send a message with the Gmail API.
 *
 * @param  {String} email RFC 5322 formatted, Base64 encoded string.
 * @param {google.auth.OAuth2} oauth2Client The authorized OAuth2 client
 */
function sendMessage(email, oauth2Client) {
  // 'me' is a special userId value meaning the authenticated user
  google.gmail('v1').users.messages.send({
    auth: oauth2Client,
    userId: 'me',
    'resource': {
      'raw': email
    }
  });
}

Then it’s just a matter of stringing everything together. The invocation part of the script:

let to = 'mmonroe@gmail.com';
let from = 'ckent@gmail.com';
let subject = 'Email subject generated with NodeJS';
let message = 'Big long email body that has lots of interesting content';

readClientSecret('client_secret.json')
  .then(clientSecretJson => {
    let clientSecret = JSON.parse(clientSecretJson);
    return authorize(clientSecret);
  }).then(oauth2client => {
    let email = createEmail(to, from, subject, message);
    sendMessage(email, oauth2client);
  }).catch(error => {
    console.error(error);
  });
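Assuming the snippets above are pasted into a single file – I’ll call it send-email.js, a name of my own choosing – sitting next to client_secret.json, running it is just:

node send-email.js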

And that’s all. Executing this the first time prompts for the code from the authorization URL it prints and then sends the email; executing it subsequent times just sends the email. Easy enough!