
Incubator build: Part 0 – Introduction

I want to build an incubator for chicken/duck/general fowl eggs. More than that, I want to actually finish the project (hatch out birds) AND I want the incubator to not suck. All 3 of these goals are particularly challenging for me, as I tend to lose interest in a project once it’s ~80% complete and never get to the part where it works, or it will kinda work but be fragile and miss the target. Additionally, the difference between the MVP of something and a version that is “nice” is absolutely massive. Part of closing that gap is usually restarting some aspect of the project, because only then do you actually understand the problem you’re trying to solve. The last few years are littered with the corpses of projects I have abandoned at various stages, most of them depressingly expensive (in both time and money) and depressingly incomplete.

I started trying to build an incubator over 2 years ago. The very first incubator-looking code I produced was from March of 2018. For people who know anything about electronics or hardware, that’s a bit preposterous. It’s really not that hard – incubators are pretty simple. Unless, of course, you know nothing about electronics, or soldering, or embedded development or incubating or… That’s where I started.

Through a whole bunch of trial and error, the first ever version kind of worked. I was starting with the ESP32 microcontroller. Knowing nothing about anything, I started with the Arduino wrapper for the ESP32. This got me up and running pretty quickly, and there’s some kind of library for virtually every sensor you’re likely to come across. That said, the quality of some of the libraries can be pretty suspect, and the interface they expose is often not quite what you want. Of course, good luck separating the wheat from the chaff when you have no idea what you’re doing.

Overall, the theme of the project seems to have been: even if you’re peripherally familiar with something, having to actually implement it and make it robust from a hardware/electronics perspective means you have to actually UNDERSTAND it. Not necessarily all the way down to the physics of it, but way more intimately than what is required for web application development (which I’m familiar with). Some of the stuff I either encountered for the first time, or actually had to start trying to understand at a practical level:

  • voltage, resistance, current, power, grounding, and (much, much later on) capacitance
  • soldering, crimping, terminals, solid vs. stranded wire, wire gauge, breadboards, protoboards
  • power supplies, stepper motors, logic level shifting, transistors, regulators, mechanical and solid state relays
  • analog sensors, digital sensors, serial communication, SPI, I²C
  • volatile vs. non-volatile memory, microcontroller power management
  • Arduino environment, ESP-IDF, CMake
  • MQTT, Eclipse Mosquitto, Postgres (TimescaleDB), Grafana, CAD, 3D printing

I fully realize people successfully hatch eggs with a cooler, an incandescent light bulb, and a container of water. As I stated at the beginning: I wanted to do this, I wanted to do it well, and I wanted to finish it. That meant a significant portion of this was straight-up learning.

How to melt a stepper motor

Summary: Power a 5V stepper with 12V, and leave most of the coils energized for long periods of time

I got one of the ubiquitous packs of 5 28BYJ-48 + ULN2003 stepper driver boards. It was 18.98$ CAD in 2018 for 5 – a very reasonable price. I like cheap things like this because I don’t know anything and I’ll wreck them for sure. Also, if you can get something to work with one of these motors, then it’s easy enough to swap out for a “real” stepper after the fact, so there are few downsides.

I tried to use one of these in a project, and it was my first time running one for more than a few seconds (just long enough to prove I could turn it). I’m using an ESP32, specifically with the Espressif IoT Development Framework (ESP-IDF) to build the software. I didn’t immediately find a stepper driver library that looked nice, and I had the step sequence on hand, so I figured I’d write my own driver. It’s nothing fancy – an array of steps, and a loop that sets the 4 pins to the corresponding values for each step in the sequence.
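For illustration only (my actual driver was C on ESP-IDF; this sketch is Python, and set_pin is a hypothetical stand-in for whatever GPIO write your platform provides, e.g. gpio_set_level on ESP-IDF), the whole driver really is just the sequence table and a loop:

```python
# The standard 8-entry half-step sequence for a 28BYJ-48: each tuple is the
# level for the four coil pins (IN1..IN4 on the ULN2003 board).
HALF_STEP_SEQUENCE = [
    (1, 0, 0, 0),
    (1, 1, 0, 0),
    (0, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 0),
    (0, 0, 1, 1),
    (0, 0, 0, 1),
    (1, 0, 0, 1),
]

def step(pins, step_index, set_pin):
    """Write one entry of the sequence to the four coil pins."""
    pattern = HALF_STEP_SEQUENCE[step_index % len(HALF_STEP_SEQUENCE)]
    for pin, level in zip(pins, pattern):
        set_pin(pin, level)
    return pattern
```

On real hardware you’d also delay a millisecond or two between steps so the rotor can keep up.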

The included ULN2003 stepper boards are marked as taking 5–12V, and as they came with a 5V stepper I assumed (I know…) the board would step the voltage down and regulate it. All it does is pass the supplied voltage straight on to the stepper motor though, so if you want to run a 12V stepper – you give it 12V; if you want to run a 5V stepper… you probably ought to give it 5V. In my mind, I was doing the right thing, as the higher voltage would mean less current and less heat. I now realize that’s wrong for more than one reason.
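Rough numbers on why that assumption backfires: the coils are (approximately) resistive, so current scales up with voltage rather than down, and heat scales with the square of voltage. The ~50 Ω coil resistance below is the commonly quoted figure for the 5 V 28BYJ-48, not something I’ve measured:

```python
# Heat dissipated in a (roughly) resistive stepper coil: I = V / R, P = V^2 / R.
R_COIL = 50.0  # ohms - commonly quoted for the 5 V 28BYJ-48 (assumed, not measured)

def coil_power(volts, r_coil=R_COIL):
    """Power dissipated as heat in one energized coil, in watts."""
    return volts ** 2 / r_coil

# 5 V gives 0.5 W per energized coil; 12 V gives 2.88 W - almost 6x the heat.
```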

As I don’t know anything, I had it do a semi-arbitrary number of steps (500), then “sleep” for a couple hours. What I didn’t pay any attention to was the coil energization between these two rotation periods. If unlucky, it could be resting with 3 out of the 4 coils energized for the entire time. The worst part is there was no reason for it – there was no load on the stepper, so it really didn’t need to hold its position.
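The fix is essentially a one-liner: drop all four coil pins low before the idle period. A sketch (again Python for illustration, with set_pin as a hypothetical stand-in for the platform’s GPIO write; only safe when, as here, there’s no load that needs holding torque):

```python
# What my first version skipped: de-energize every coil before a long idle
# period, so nothing cooks while the motor just sits there.
def release(pins, set_pin):
    """Drop all coil pins low; holding torque is lost, but the coils stay cold."""
    for pin in pins:
        set_pin(pin, 0)

def rotate_then_sleep(pins, set_pin, do_steps, sleep):
    do_steps()               # e.g. the 500 steps of the drive sequence
    release(pins, set_pin)   # the line that would have saved this motor
    sleep()                  # idle for a couple of hours with cold coils
```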

Anyways, these look like nylon gears to me, which means the motor almost certainly got over 100°C (which is pretty horrifying). Now I have 4 cheap stepper motors to play with…

I didn’t even notice until I posted the pictures, but it looks like the drive gear melted completely and turned into a puddle. I assumed I had lost it while taking it apart, but obviously this motor was even worse off than I thought.

Li-Ion to 3.3 V Buck-boost Converter

Looking at powering an ESP32 from Li-Ion batteries, specifically the NCR18650B (3400mAh), I tried to build the circuit in Random Nerd Tutorials’ Power ESP32/ESP8266 with Solar Panels. There, an MCP1700-3302E LDO regulator (PDF warning) is suggested, but when using the NodeMCU ESP-32S it could not start up Wi-Fi reliably. Every now and then it would work, but my guess is the 250mA limit was not quite enough to satisfy the current spike as the Wi-Fi turns on.

It could be that my particular board was deficient in some way, but I wanted more flexibility on the input voltage end of things anyway (e.g. boosting when the voltage is too low). When looking around, two chips stood out to me, the TPS63020 and TPS63060 (PDFs):

                        TPS63020               TPS63060
Input voltage           1.8 V – 5.5 V          2.5 V – 12 V
Output voltage          1.2 V – 5.5 V          2.5 V – 8 V
Output current          2 A @ 3.3 V            2 A in buck mode @ 5 V (VIN < 10 V)
                        (VIN > 2.5 V)          1.3 A in boost mode @ 5 V (VIN > 4 V)
Quiescent current       25 μA                  < 30 μA
Operating temperature   −40 °C to 85 °C        −40 °C to 85 °C

They don’t seem to be sold on boards particularly commonly, so I ordered a couple of whatever I could find. I ordered two boards with the TPS63020 on them (this one, for ~13.37$ CAD, and this one, for ~7.28$ CAD) and one board with the TPS63060 on it (this one, for ~5.11$ CAD).

All of them are slightly more expensive than I was hoping, but the specs are much more in line with what I wanted compared to the LDO regulator. I’d love to find a TPS63021 (the fixed 3.3 V output chip) to play with, but no luck so far.

Visualizing 3D printer Z axis offset

This is the z-height calibration model from the Prusa forum (source below). While the number is somewhat arbitrary (the values will mean nothing for a different printer), it represents the distance from the PINDA probe to the print bed, where a higher number means the nozzle sits closer to the bed. These images span from “not even making a layer” to “too close to extrude”. This is printed in ~3 year old red M3D 3D Ink PLA. I was having trouble with bed adhesion, so I figured it was worth going all out and giving this a shot. All 4 prints are identical in everything except z-offset. Source:


A bit of an angle shot to hopefully show off some of the texture differences. You can kind of see that at 775 it’s starting to look a bit off, and at 925 it’s similarly starting to look off.


Dell 9550 battery replacement

I was a little unhappy with how my Dell 9550 had aged. It’s only 2.5 years old, but it was seeming pretty rough. The trackpad was finicky, having a hard time registering right clicks. Additionally, it was getting super hot, having trouble sleeping, and the battery life was almost nonexistent. Dell had shipped me a new battery a few months earlier, but I never got around to swapping it out.

I sat down to finally do it and looked for the instructions. Turns out the trackpad issues were exactly why Dell sent out the new battery – the battery swells up and presses on the underside of the trackpad. I opened up the back of the laptop and the heat issues immediately became clear…


First impression – it’s super gross. Have you noticed the trouble spot though?


That’s where the air is supposed to go in…


Pulled out as much as I could get, pretty damn gross…

Turns out that dust clogging the fans causes overheating, which causes performance problems and does terrible things to battery life. No idea how much of the difference was from replacing the battery and how much was from unclogging the fans, but it’s night and day. I haven’t heard the fans come on since, and the battery life has gone from ~45 minutes to >6 hours. Worthwhile to not be lazy. Thanks Dell, it was nice of you to send out the battery unsolicited!

UFW, OpenVPN, forwarding traffic and not breaking everything

I’ve previously written about using OpenVPN to escape Xplornet’s double NAT. Every now and then I’ll set up a new server (following the steps there) and inevitably run into some firewall configuration problem. I’ve never really taken the time to understand how to use iptables. I understand that it’s theoretically simple, but amazingly I always have a hard time with it. To that end, I’ve used ufw to try and help.

The number one piece of advice for securing anything connected to the internet is to reduce the attack surface. Great:

sudo ufw default deny incoming
sudo ufw allow ssh

and now nothing works. Attack surface minimized!

Before going too far, I use the nuclear option on new or new-ish servers to ensure I know what I’m dealing with (NOTE: this leaves your server WIDE open, don’t stop here!):

# Reset ufw and disable
sudo ufw reset

# Flush all iptables rules
sudo iptables -F
sudo iptables -X
sudo iptables -t nat -F
sudo iptables -t nat -X
sudo iptables -t mangle -F
sudo iptables -t mangle -X
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

This leaves me with:

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
$ sudo ufw status verbose
Status: inactive

Awesome. Clean slate! Starting from the beginning again:

$ sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)
$ sudo ufw allow ssh
Rules updated
$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip
To       Action     From
--       ------     ----
22/tcp   ALLOW IN   Anywhere

For the whole OpenVPN setup to work, the VPN client needs to actually be able to connect to the server. We’ll need to allow traffic on 1194 (or whatever port you’ve configured OpenVPN to use). Note that OpenVPN defaults to UDP, so allowing 1194/udp specifically would be slightly tighter than opening the port for both protocols.

$ sudo ufw allow 1194
Rule added

You’ll also need to allow traffic to whatever port it is you’re forwarding. For example, if I want port 3000 to be what I’m exposing to the public:

$ sudo ufw allow 3000
Rule added

Leaving us with:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To       Action     From
--       ------     ----
22/tcp   ALLOW IN   Anywhere
1194     ALLOW IN   Anywhere
3000     ALLOW IN   Anywhere

That’s about it for the more intuitive parts. The server is relatively locked down, although if you are using a fixed VPN client it may be worthwhile to white-list that single address. To allow the forwarding that the OpenVPN setup relies on, we’ll need to change the ufw default forward policy. Edit /etc/default/ufw and change the value of DEFAULT_FORWARD_POLICY from DROP to ACCEPT:

$ sudo nano /etc/default/ufw

# Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
# if you change this you will most likely want to adjust your rules.

Then disable and re-enable ufw to update it:

$ sudo ufw disable && sudo ufw enable

Finally, adding the iptables rules used in the previous post (I’m sure there’s a way to do this with ufw, I just don’t know it):

$ sudo iptables -t nat -A PREROUTING -d -p tcp --dport 3000 -j DNAT --to-dest
$ sudo iptables -t nat -A POSTROUTING -d -p tcp --dport 3000 -j SNAT --to-source

Et voilà! A relatively locked down server that plays nicely with OpenVPN and forwarding traffic.

Replacing middle baffle support in Osburn 1600

We have an old Osburn 1600 freestanding stove (like these). Our middle baffle support rotted out. That’d be this piece in the stove (this isn’t exactly the model we have, but close enough):


Or how it looked in real life:


It was holding the fire bricks up but it seemed like it wouldn’t be doing so for long. A couple money shots:

I contacted and they helped me ensure I had the exact part number. $56.16 (after taxes) and a couple days later the piece showed up. Shockingly heavy, it’s just two pieces of steel and 6 welds. If I were more ambitious, I would have tried to weld it myself, but I already have too many projects on the go.


Replacing it was actually really straightforward. Nothing had to be disassembled, all the pieces are simply leaning on one another. There are 3 fire bricks on each side. There’s a bunch of space above the bricks (the smoke chamber), so pushing up the center brick is about as easy as can be. After that, the side bricks are trivial.

The remaining 5 bricks follow quickly, then the old middle baffle support basically fell out. Angle the new middle baffle support, put the bricks back in, and it’s all done.

Took about 15 minutes and the fireplace is good to go!

Xplornet double NAT: VPN edition

Previously, I wrote about using a reverse SSH tunnel to escape a double NAT (specifically, the one provided by Xplornet). Without looking into why (maybe poor, intermittent connection and particularly awful uplink), my previous solution was not stable. Even with autossh, the connection kept dropping and not picking back up. I’m exclusively accessing this remotely, so when I notice the service being down I’m in pretty much the worst position to fix it.

Grab a public server with a static IP – for example a 5$/month Linode or Droplet. I’ve seen reference to cheaper international products, but I have no experience with them.


If you’ve picked Linode:

  • deploy an Ubuntu image
  • boot it up
  • SSH to the machine
  • do regular server stuff – make sure it’s up to date, generally read over Digital Ocean’s guide here for inspiration

Set up OpenVPN server

On the public computer (OpenVPN server/cloud instance):

The first time I did this, I set up OpenVPN myself. It’s not awful, there are some pretty comprehensive guides (like this one), but it definitely sucks enough to look for an alternative. Googling around shows two compelling public scripts – Nyr’s openvpn-install and Angristan’s version based off Nyr’s. Looking over the two, I ended up picking Angristan’s version without all that much consideration.

SSH to the machine and execute the script on your public server to set up the certificates and keys for your client. The defaults for the script all seem sensible – you don’t have to feel bad if you just mash enter until the name prompt comes up, then give your client a reasonable name.

$ wget
$ chmod +x
$ ./

You should notice at the end of the script execution a line that looks something like this:


Your client config is available at /root/unifi-video-server.ovpn
If you want to add more clients, you simply need to run this script another time!

Take note of the location of your .ovpn file, as you’ll need it for the next step.

Set up OpenVPN client

On the private computer (machine that’s behind the double NAT):

On your client machine, get the OVPN configuration file that was generated from the previous step. scp is likely the easiest way to do this. From the client machine, you can retrieve the file like:

scp {server user}@{server host}:{remote path to ovpn} {local path}

For example:

$ scp root@ .

This will copy the file to the current directory on the machine. An extremely quick sanity check to ensure you can connect:

sudo openvpn unifi-video-server.ovpn

You should see:

Initialization Sequence Completed

once you do, you can ctrl + c your way out. If this wasn’t successful… something has gone wrong and you should fix it.

To make sure your client connects on start up:

  • rename your .ovpn file to be a .conf file
  • move the .conf file to /etc/openvpn
  • Edit /etc/default/openvpn to ensure AUTOSTART is configured to start your connection

At this stage, you have an OpenVPN server set up and an OpenVPN client that automatically connects to the server. All that’s left is to do the internet part.

Set up server traffic forwarding to client

On the public computer (OpenVPN server/cloud instance):

What we want now is to forward traffic that hits a particular port on the public server to the private computer. Not only that, but you want the private computer to think the traffic is coming from the public server, so it doesn’t respond directly to whoever sent the internet request.

First things first, toggle the server to allow forwarding traffic (if you don’t do this, you’ll end up insanely frustrated and convinced iptables is the devil):

sudo sysctl -w net.ipv4.ip_forward=1

Note that this change does not survive a reboot – to make it permanent, also set net.ipv4.ip_forward=1 in /etc/sysctl.conf (or a file under /etc/sysctl.d/).

We need two pieces of information:

  • the public WAN (internet) IP address of the server
  • the virtual address of the OpenVPN client

Finding the public address can be done with:

$ curl

The virtual address of the OpenVPN client can be found in the OpenVPN status log while the client is connected (see above for how to set up the connection). The log seems to be in either /etc/openvpn/openvpn-status.log or /etc/openvpn/openvpn.log:

$ cat /etc/openvpn/openvpn.log
Updated,Sun Nov 5 01:37:33 2017
Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
unifi-video-server,,39837,52165,Sun Nov 5 01:02:05 2017
Virtual Address,Common Name,Real Address,Last Ref
unifi-video-server,,Sun Nov 5 01:36:54 2017
Max bcast/mcast queue length,1

Now we’ll need a source routing NAT rule and a destination routing NAT rule for every port that is going to be forwarded. They’ll look something like this:

iptables -t nat -A PREROUTING -d {server WAN IP} -p tcp --dport {port} -j DNAT --to-dest {client virtual address}:{port}
iptables -t nat -A POSTROUTING -d {client virtual address} -p tcp --dport {port} -j SNAT --to-source {server virtual address}

Practically speaking, with the following:

  • public server whose Internet accessible IP address is
  • public server whose OpenVPN virtual address is
  • private computer whose OpenVPN virtual address is
  • forwarding port 7080 on the public server to port 7080 on the private computer

It’d look something like this:

iptables -t nat -A PREROUTING -d -p tcp --dport 7080 -j DNAT --to-dest
iptables -t nat -A POSTROUTING -d -p tcp --dport 7080 -j SNAT --to-source

Now the only thing left is to make sure the routing rules persist across reboots.

$ sudo apt install iptables-persistent
$ sudo netfilter-persistent save
$ sudo netfilter-persistent reload

And that’s it. In my experience this is both a more robust solution to the double NAT problem and one that uses the tools in a more conventional way. I visited, and (subject to the awful uplink speed from Xplornet), my page loaded!

Git: Determine which branches have been merged into any of a set of branches

Here’s my implementation (note that I’m neither a git expert nor a shell scripting expert):

1. Determine the set of branches that, when another branch has been merged into it, make up the modified meaning of a branch having been merged
2. Determine a pattern that narrows the list of all branches to only the branches in the previous set. For me, it was origin/release
3. Do everything else:

git branch --remote --list 'origin/release/*' --format="%(objectname)" | xargs -n1 -I {} git branch --remote --merged {}
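The same idea, sketched in Python for readability (the function name and the remote flag are my own, not git’s; this shells out to git, so it assumes it runs inside a repository):

```python
import subprocess

def _git(*args):
    """Run a git command and return its stdout as text."""
    return subprocess.run(
        ("git",) + args, capture_output=True, text=True, check=True
    ).stdout

def merged_into_any(pattern, remote=True):
    """Names of branches merged into ANY branch matching pattern,
    e.g. 'origin/release/*'."""
    scope = ("--remote",) if remote else ()
    # Steps 1 and 2: resolve the pattern to the commit IDs of the release heads
    heads = _git("branch", *scope, "--list", pattern,
                 "--format=%(objectname)").split()
    merged = set()
    # Step 3: union of everything merged into each of those heads
    for head in heads:
        listing = _git("branch", *scope, "--merged", head)
        merged.update(line.lstrip("* ").strip() for line in listing.splitlines())
    return merged
```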


What use is this?

Git has functionality to determine which branches are already merged into a specified branch (see git branch documentation and the git branch --merged flag in particular). This works well if you’re looking at a single branch at a time. The product I work on during the day has many developers working on multiple different releases at any one time – usually ~5 versions of the product are deployed and covered by service level agreements that ensure they’re continually supported. This is the reality for a great many enterprise applications deployed on customer infrastructure – continuous deployment just isn’t a thing without massive investment from all involved.

I found that developers were not good at cleaning up feature branches after merging them into their respective release streams. As a first step, I wanted to understand how many branches were actually merged, where “merged” is defined as “merged into any of the various release branches”. I suspect git has this functionality somewhere, but I wasn’t able to find it.

How to run TypeScript in the browser

Short answer

You can’t, that’s not a thing (at least so far).

Longer answer:

By building an overly complicated front-end tool chain! Seriously, it’s crazy how much this is not out-of-the-box. The preface, as always, is that I don’t really know what I’m doing, so I certainly wouldn’t recommend this for any real projects – I’m just using it for experimentation.

Tools needed

  • NodeJS
    • JavaScript run-time environment
    • Needed to actually run JavaScript and all the tooling
  • TypeScript
    • Typed language that compiles to JavaScript, comes with a compiler
    • This is what we want to write!
  • Babel
    • JavaScript to JavaScript compiler with ability to polyfill new features into older versions
    • Needed to convert the version of JavaScript we’re writing to a version browsers can execute
  • Webpack
    • Asset bundler – decouples your development project structure from the deliverable
    • Not strictly needed, but extremely useful for any “real” project


It’ll look something like this when all done:


  1. Write code in TypeScript
  2. Use the TypeScript compiler to compile the TypeScript into a recent version of JavaScript, without providing backwards compatibility or browser polyfilling
  3. Use the Babel compiler to turn that recent version of JavaScript, which browsers can’t natively execute, into a version browsers can execute
  4. Use Webpack to grab your assortment of JavaScript files, organized however you want for development, and create a more easily deliverable “bundle” of everything

From the beginning, that means:

    1. Install NodeJS (use the latest version unless you have reason to do otherwise)
    2. Create your project
$ yarn init
yarn init v0.27.5
question name (typescript-front-end-seed): 
question version (1.0.0): 
question description: Seed project for TypeScript in the browser
question entry point (index.js): 
question repository url ( 
question author (Toby Murray <>): 
question license (MIT): 
success Saved package.json
Done in 34.38s.
    3. Add all the dependencies we’ll need – TypeScript, Babel (note Babel by itself doesn’t really do anything, you need to include a plugin), and Webpack
$ yarn add -D typescript babel-cli babel-preset-env webpack
    4. Create whatever project structure you want. I’ll do something like src/ for TypeScript code, and public/ for static files (e.g. HTML).
$ mkdir src public
$ touch public/index.html src/index.ts
    5. Create the configuration files you’ll need for all the tools – tsconfig.json for TypeScript, .babelrc for Babel and webpack.config.js for Webpack
$ touch tsconfig.json .babelrc webpack.config.js

Tool configuration

Now comes either the interesting part or the awful part, depending on your perspective – configuring all the tools to do what we want! To keep things clear, we’ll place the output of the TypeScript compiler into a build-tsc folder, then feed that as input to Babel. The output of Babel will go into a build-babel folder. We’ll then use Webpack to consume the contents of build-babel and put the result in a dist folder (this is what we’d actually serve to a client browser).


Keeping this as simple as possible (there are plenty of options to play with), the two big decisions are which target and which module version to use. Fortunately, we don’t have to care too much – the output just has to be consumable by Babel. To get all the features possible (mmm, delicious features), we can target e.g. ES2017 and use commonjs.

{
  "compilerOptions": {
    "target": "es2017",      /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', or 'ESNEXT'. */
    "module": "commonjs",    /* Specify module code generation: 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
    "rootDir": "./src",      /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
    "outDir": "./build-tsc"  /* Redirect output structure to the directory. */
  }
}


Again, doing as little as possible, we’ll tell Babel to do whatever it needs to do to target apparently 95% of users’ browsers. For some reason, Babel does not support setting the output directory in the configuration file (see options here), so it has to be passed as an argument when invoking Babel.

{
  "presets": [
    ["env", {
      "targets": {
        "browsers": ["last 2 versions", "safari >= 7"]
      }
    }]
  ]
}

Likewise, to start, Webpack doesn’t have to be that complicated. We’ll include source maps here; don’t feel obliged to do so though.

const path = require('path');

module.exports = {
  devtool: "source-map",
  entry: './build-babel/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  }
};

To avoid having to remember anything, a few scripts in package.json can be useful. Breaking them out so it’s clear what step is doing what, it could look like this:

"scripts": {
  "clean": "yarn run clean-build-steps && rm -rf dist",
  "tsc": "./node_modules/.bin/tsc",
  "babel": "./node_modules/.bin/babel build-tsc --out-dir build-babel --source-maps",
  "webpack": "webpack && cp public/* dist",
  "clean-build-steps": "rm -rf build-tsc build-babel",
  "build": "yarn run clean && yarn run tsc && yarn run babel && yarn run webpack && yarn run clean-build-steps"
}

Running yarn build (after the initial install) will:

  1. Clean anything from previous executions of the script
    1. This includes any leftover build artifacts, as well as the dist directory
  2. Use the TypeScript compiler to take everything from the src directory, transpile it to ES2017 JavaScript, and output it into the build-tsc directory
  3. Use Babel to convert everything in the build-tsc directory from ES2017 to ES2015 and output it into build-babel
  4. Use Webpack:
    1. Look in the build-babel folder
    2. Find index.js
    3. Parse index.js as an entrypoint, and resolve dependencies
    4. Add everything needed into one big bundle.js
  5. Create the “deployable” directory
    1. Copy the static HTML into the dist directory
    2. Copy the bundle.js into the dist directory


With something like http-server and serving the dist directory, we can see the product of our work!

$ http-server dist
Starting up http-server, serving dist
Available on:
Hit CTRL-C to stop the server

See the GitHub repository here and the deployed example here.