Visualizing 3D printer Z axis offset

This is the z-height calibration model from the Prusa forum (source below). The number itself is somewhat arbitrary (the values will mean nothing for a different printer); it represents the distance from the PINDA probe to the print bed, where a higher number means the nozzle sits closer to the bed. These images span from “not even making a layer” to “too close to extrude”. This is printed in ~3 year old red M3D 3D Ink PLA. I was having trouble with bed adhesion, so I figured it was worth going all out and giving this a shot. All 4 prints are identical in everything except the z-offset. Source: https://shop.prusa3d.com/forum/assembly-and-first-prints-troubleshooting-f62/life-adjust-z-my-way-t2981.html


A bit of an angle shot to hopefully show off some of the texture differences. You can kind of see that at 775 it’s starting to look a bit off, and at 925 it’s similarly starting to look off.

 


Dell 9550 battery replacement

I was a little unhappy with how my Dell 9550 had aged. It’s only 2.5 years old, but it was feeling rough. The trackpad was finicky and had a hard time registering right clicks. On top of that, it was getting super hot, having trouble sleeping, and the battery life was almost nonexistent. Dell had shipped me a new battery a few months earlier, but I never got around to swapping it out.

I sat down to finally do it and looked for the instructions. Turns out the trackpad issues were exactly why Dell sent out the new battery – the battery swells up and presses on the underside of the trackpad. I opened up the back of the laptop and the heat issues immediately became clear…


First impression – it’s super gross. Have you noticed the trouble spot though?


That’s where the air is supposed to go in…


Pulled out as much as I could get – pretty damn gross…

Turns out that dust-clogged fans cause overheating, which causes both performance problems and does terrible things to battery life. I have no idea how much of the difference was from replacing the battery and how much was from unclogging the fans, but it’s night and day. I haven’t heard the fans come on since, and the battery life has gone from ~45 minutes to >6 hours. Worthwhile to not be lazy. Thanks Dell, it was nice of you to send out the battery unsolicited!

UFW, OpenVPN, forwarding traffic and not breaking everything

I’ve previously written about using OpenVPN to escape Xplornet’s double NAT. Every now and then I’ll set up a new server (following the steps there) and inevitably run into some firewall configuration problem. I’ve never really taken the time to understand how to use iptables. I understand it’s theoretically simple, but amazingly I always have a hard time with it. To that end, I’ve been using ufw to try and help.

The number one piece of advice for securing anything connected to the internet is to reduce the attack surface. Great:

sudo ufw default deny incoming
sudo ufw allow ssh

and now nothing works. Attack surface minimized!

Before going too far, I use the nuclear option on new or new-ish servers to make sure I know what I’m dealing with (NOTE: this leaves your server WIDE open, don’t stop here!):

# Reset ufw and disable it
sudo ufw reset

# Flush all iptables rules
sudo iptables -F
sudo iptables -X
sudo iptables -t nat -F
sudo iptables -t nat -X
sudo iptables -t mangle -F
sudo iptables -t mangle -X
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

This leaves me with:

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

$ sudo ufw status verbose
Status: inactive

Awesome. Clean slate! Starting from the beginning again:

$ sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)
$ sudo ufw allow ssh
Rules updated
$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To       Action     From
--       ------     ----
22/tcp   ALLOW IN   Anywhere

For the whole OpenVPN setup to work, the VPN client needs to actually be able to connect to the server, so we’ll need to allow traffic on 1194 (or whatever port you’ve configured OpenVPN to use).

$ sudo ufw allow 1194
Rule added
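
OpenVPN uses UDP by default, so if you haven’t switched the server to TCP, a slightly tighter version of the rule above is to restrict it to UDP (skip this if your server config says proto tcp):

$ sudo ufw allow 1194/udp
Rule added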

You’ll also need to allow traffic to whatever port it is you’re forwarding. For example, if I want port 3000 to be what I’m exposing to the public:

$ sudo ufw allow 3000
Rule added

Leaving us with:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To       Action     From
--       ------     ----
22/tcp   ALLOW IN   Anywhere
1194     ALLOW IN   Anywhere
3000     ALLOW IN   Anywhere

That’s about it for the more intuitive parts. The server is relatively locked down, although if you’re using a fixed VPN client it may be worthwhile to whitelist that single address. To allow the forwarding that the OpenVPN setup relies on, we’ll need to change ufw’s default forward policy. Edit /etc/default/ufw and change the value of DEFAULT_FORWARD_POLICY from DROP to ACCEPT:

$ sudo nano /etc/default/ufw
...

# Set the default forward policy to ACCEPT, DROP, or REJECT. Please note that if
# you change this you will most likely want to adjust your rules.
DEFAULT_FORWARD_POLICY="ACCEPT"

Then disable and re-enable ufw to update it:

$ sudo ufw disable && sudo ufw enable

Finally, adding the iptables rules used in the previous post (I’m sure there’s a way to do this with ufw, I just don’t know it):

$ sudo iptables -t nat -A PREROUTING -d 183.214.158.198 -p tcp --dport 3000 -j DNAT --to-dest 10.8.0.2:3000
$ sudo iptables -t nat -A POSTROUTING -d 10.8.0.2 -p tcp --dport 3000 -j SNAT --to-source 10.8.0.1
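
I believe ufw can also carry NAT rules itself via /etc/ufw/before.rules – adding a *nat section above the existing *filter section is the usual approach, and it should survive a ufw disable/enable cycle. I haven’t verified this on this exact setup, so treat it as a sketch rather than gospel:

# /etc/ufw/before.rules – add above the existing *filter section
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -d 183.214.158.198 -p tcp --dport 3000 -j DNAT --to-destination 10.8.0.2:3000
-A POSTROUTING -d 10.8.0.2 -p tcp --dport 3000 -j SNAT --to-source 10.8.0.1
COMMIT

Then sudo ufw disable && sudo ufw enable to pick up the change.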

Et voilà! A relatively locked down server that plays nicely with OpenVPN and forwarding traffic.

Replacing middle baffle support in Osburn 1600

We have an old Osburn 1600 freestanding stove (like these). Our middle baffle support rotted out. That’d be this piece in the stove (this isn’t exactly the model we have, but close enough):


Or how it looked in real life:


It was holding the fire bricks up but it seemed like it wouldn’t be doing so for long. A couple money shots:

I contacted sbi-international.com and they helped me ensure I had the exact part number. $56.16 (after taxes) and a couple days later the piece showed up. Shockingly heavy, it’s just two pieces of steel and 6 welds. If I were more ambitious, I would have tried to weld it myself, but I already have too many projects on the go.


Replacing it was actually really straightforward. Nothing had to be disassembled; all the pieces are simply leaning on one another. There are 3 fire bricks on each side. There’s a bunch of space above the bricks (the smoke chamber), so pushing up the center brick is about as easy as can be. After that, the side bricks are trivial.

The remaining 5 bricks follow quickly, then the old middle baffle support basically fell out. Angle the new middle baffle support, put the bricks back in, and it’s all done.

Took about 15 minutes and the fireplace is good to go!

Xplornet double NAT: VPN edition

Previously, I wrote about using a reverse SSH tunnel to escape a double NAT (specifically, the one provided by Xplornet). Without digging into why (likely the poor, intermittent connection and particularly awful uplink), that solution was not stable. Even with autossh, the connection kept dropping and not picking back up. I only ever access this machine remotely, so when I notice the service is down I’m in pretty much the worst position to fix it.

Grab a public server with a static IP – for example, a $5/month Linode or Droplet. I’ve seen references to cheaper international offerings, but I have no experience with them.


If you’ve picked Linode:

  • deploy an Ubuntu image
  • boot it up
  • SSH to the machine
  • do regular server stuff – make sure it’s up to date, generally read over Digital Ocean’s guide here for inspiration

Set up OpenVPN server

On the public computer (OpenVPN server/cloud instance):

The first time I did this, I set up OpenVPN myself. It’s not awful, there are some pretty comprehensive guides (like this one), but it definitely sucks enough to look for an alternative. Googling around shows two compelling public scripts – Nyr’s openvpn-install and Angristan’s version based off Nyr’s. Looking over the two, I ended up picking Angristan’s version without all that much consideration.

SSH to the machine and execute the script on your public server to set up the certificates and keys for your client. The defaults for the script all seem sensible – you don’t have to feel bad if you just mash enter until the name prompt comes up, then give your client a reasonable name.

$ wget https://raw.githubusercontent.com/Angristan/OpenVPN-install/master/openvpn-install.sh
$ chmod +x openvpn-install.sh
$ ./openvpn-install.sh

You should notice at the end of the script execution a line that looks something like this:

...
Finished!

Your client config is available at /root/unifi-video-server.ovpn
If you want to add more clients, you simply need to run this script another time!

Take note of the location of your .ovpn file, as you’ll need it for the next step.

Set up OpenVPN client

On the private computer (machine that’s behind the double NAT):

On your client machine, get the OVPN configuration file that was generated from the previous step. scp is likely the easiest way to do this. From the client machine, you can retrieve the file like:

scp {server user}@{server host}:{remote path to ovpn} {local path}

For example:

$ scp root@37.48.80.202:/root/unifi-video-server.ovpn .

This will copy the file to the current directory on the machine. An extremely quick sanity check to ensure you can connect:

sudo openvpn unifi-video-server.ovpn

You should see:

Initialization Sequence Completed

once you do, you can ctrl + c your way out. If this wasn’t successful… something has gone wrong and you should fix it.
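
If you want one more check before disconnecting, ping the server’s end of the tunnel from a second terminal while the connection is up. Assuming the default 10.8.0.0/24 subnet this setup uses, the server’s virtual address is 10.8.0.1:

$ ping -c 3 10.8.0.1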

To make sure your client connects on start up:

  • rename your .ovpn file to be a .conf file
  • move the .conf file to /etc/openvpn
  • edit /etc/default/openvpn to ensure AUTOSTART is configured to start your connection (see the example below)
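
For reference, the AUTOSTART setting is just a variable in /etc/default/openvpn. Either of these should work (the name is your .conf file without the extension – unifi-video-server in my case):

$ sudo nano /etc/default/openvpn
...
AUTOSTART="all"
# or, to start only a specific config:
# AUTOSTART="unifi-video-server"

Depending on your distro you may need to restart the OpenVPN service (or just reboot) for this to take effect.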

At this stage, you have an OpenVPN server set up and an OpenVPN client that automatically connects to the server. All that’s left is to do the internet part.

Set up server traffic forwarding to client

On the public computer (OpenVPN server/cloud instance):

What we want now is to forward traffic that hits a particular port on the public server to the private computer. Not only that, but you want the private computer to think the traffic is coming from the public server, so it doesn’t respond directly to whoever sent the internet request.

First things first, toggle the server to allow forwarding traffic (if you don’t do this, you’ll end up insanely frustrated and convinced iptables is the devil):

sysctl -w net.ipv4.ip_forward=1
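
Note that sysctl -w only lasts until the next reboot. To persist it, set the same key in /etc/sysctl.conf as well (it’s possible the install script already did this – worth checking before adding a duplicate line):

$ echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p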

We need two pieces of information:

  • the public WAN (internet) IP address of the server
  • the virtual address of the OpenVPN client

Finding the public address can be done with:

$ curl ipinfo.io/ip
37.48.80.202

The virtual address of the OpenVPN client can be found in the OpenVPN status log while the client is connected (see above for how to set up the connection for now). The log seems to be in either /etc/openvpn/openvpn-status.log or /etc/openvpn/openvpn.log.

$ cat /etc/openvpn/openvpn.log
OpenVPN CLIENT LIST
Updated,Sun Nov 5 01:37:33 2017
Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
unifi-video-server,37.48.80.202:49014,39837,52165,Sun Nov 5 01:02:05 2017
ROUTING TABLE
Virtual Address,Common Name,Real Address,Last Ref
10.8.0.2,unifi-video-server,37.48.80.202:49014,Sun Nov 5 01:36:54 2017
GLOBAL STATS
Max bcast/mcast queue length,1
END

Now we’ll need a destination NAT (DNAT) rule and a source NAT (SNAT) rule for every port that is going to be forwarded. They’ll look something like this:

iptables -t nat -A PREROUTING -d {server WAN IP} -p tcp --dport {port} -j DNAT --to-dest {client virtual address}:{port}
iptables -t nat -A POSTROUTING -d {client virtual address} -p tcp --dport {port} -j SNAT --to-source {server virtual address}

Practically speaking, with the following:

  • public server whose Internet accessible IP address is 37.48.80.202
  • public server whose OpenVPN virtual address is 10.8.0.1
  • private computer whose OpenVPN virtual address is 10.8.0.2
  • Forwarding port 7080 on the public server to port 7080 on the private computer

It’d look something like this:

iptables -t nat -A PREROUTING -d 37.48.80.202 -p tcp --dport 7080 -j DNAT --to-dest 10.8.0.2:7080
iptables -t nat -A POSTROUTING -d 10.8.0.2 -p tcp --dport 7080 -j SNAT --to-source 10.8.0.1
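
At this point a quick sanity check from any machine outside the VPN is worthwhile – assuming something on the private computer is actually listening on 7080, a request to the public address should now get a response:

$ curl http://37.48.80.202:7080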

Now the only thing left is to make sure the routing rules persist across reboots.

$ sudo apt install iptables-persistent
$ sudo netfilter-persistent save
$ sudo netfilter-persistent reload

And that’s it. In my experience this seems to be both a more robust solution to the double NAT problem, and uses tools in a more conventional way. I visited 37.48.80.202:7080, and (subject to the awful uplink speed from Xplornet), my page loaded!

Git: Determine which branches have been merged into any of a set of branches

Here’s my implementation (note that I’m neither a git expert nor a shell scripting expert):

1. Determine the set of branches that define “merged” – i.e. a branch counts as merged once it has been merged into any of them
2. Determine a pattern that narrows the list of all branches to only the branches in the previous set. For me, it was origin/release
3. Do everything else:

git branch --remote --list origin/release/* --format="%(objectname)" | xargs -n1 -I {} git branch --remote --merged {}
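
A small variation I’ve found helpful: quoting the glob so the shell can’t expand it, and de-duplicating the output (the same feature branch will show up once per release branch it has been merged into):

git branch --remote --list 'origin/release/*' --format="%(objectname)" \
  | xargs -n1 -I {} git branch --remote --merged {} \
  | sort -u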

 


What use is this?

Git has functionality to determine which branches are already merged into a specified branch (see git branch documentation and the git branch --merged flag in particular). This works well if you’re looking at a single branch at a time. The product I work on during the day has many developers working on multiple different releases at any one time – usually ~5 versions of the product are deployed and covered by service level agreements that ensure they’re continually supported. This is the reality for a great many enterprise applications deployed on customer infrastructure – continuous deployment just isn’t a thing without massive investment from all involved.

I found that developers were not good at cleaning up feature branches after merging them into their respective release streams. As a first step, I wanted to understand how many branches were actually merged, where “merged” is defined as “merged into any of the various release branches”. I’m suspicious that git has this functionality somewhere, but I wasn’t able to find it.

How to run TypeScript in the browser

Short answer

You can’t, that’s not a thing (at least so far).

Longer answer:

By building an overly complicated front-end tool chain! Seriously, it’s crazy how much this is not available out of the box. The preface, as always, is that I don’t really know what I’m doing, so I certainly wouldn’t recommend this for any real projects – I’m just using it for experimentation.

Tools needed

  • NodeJS
    • JavaScript run-time environment
    • Needed to actually run JavaScript and all the tooling
  • TypeScript
    • Typed language that compiles to JavaScript, comes with a compiler
    • This is what we want to write!
  • Babel
    • JavaScript to JavaScript compiler with ability to polyfill new features into older versions
    • Needed to convert the version of JavaScript we’re writing to a version browsers can execute
  • Webpack
    • Asset bundler – decouples your development project structure from the deliverable
    • Not strictly needed, but extremely useful for any “real” project

Steps

It’ll look something like this when all done:

TypeScriptProcess(1)

  1. Write code in TypeScript
  2. Use TypeScript compiler to compile TypeScript into a recent version of JavaScript, without providing backwards compatibility or browser polyfilling
  3. Use Babel compiler to turn recent version of JavaScript, which browsers can’t natively execute, into a version browsers can execute
  4. Use Webpack to grab your assortment of JavaScript files, organized however you want for development, and create a more easily deliverable “bundle” of everything

From the beginning, that means:

    1. Install NodeJS (use the latest version unless you have reason to do otherwise)
    2. Create your project
$ yarn init
yarn init v0.27.5
question name (typescript-front-end-seed): 
question version (1.0.0): 
question description: Seed project for TypeScript in the browser
question entry point (index.js): 
question repository url (https://github.com/tobymurray/typescript-front-end-seed.git): 
question author (Toby Murray <murray.toby+github@gmail.com>): 
question license (MIT): 
success Saved package.json
Done in 34.38s.
    3. Add all the dependencies we’ll need – TypeScript, Babel (note Babel by itself doesn’t really do anything, you need to include a plugin), and Webpack
$ yarn add -D typescript babel-cli babel-preset-env webpack
    4. Create whatever project structure you want. I’ll do something like src/ for TypeScript code and public/ for static files (e.g. HTML) – a placeholder index.ts is sketched after these steps.
$ mkdir src public
$ touch public/index.html src/index.ts
    5. Create the configuration files you’ll need for all the tools – tsconfig.json for TypeScript, .babelrc for Babel and webpack.config.js for Webpack
$ touch tsconfig.json .babelrc webpack.config.js
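
If you want something to actually compile and bundle while wiring all this up, a trivial placeholder entry point is enough – this isn’t necessarily what the seed repository contains, just the simplest thing that exercises the whole pipeline (public/index.html only needs a <script src="bundle.js"></script> tag):

// src/index.ts – placeholder entry point, just enough to prove the build works
const greeting: string = 'Hello from TypeScript!';

document.addEventListener('DOMContentLoaded', () => {
  document.body.textContent = greeting;
});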

Tool configuration

Now comes either the interesting part or the awful part, depending on your perspective – configuring all the tools to do what we want! To keep things clear, we’ll place the output of the TypeScript compiler into a build-tsc folder, then feed that as input into Babel. The output of Babel will go into a build-babel folder. We’ll then use Webpack to consume the contents of the build-babel folder and put the result in a dist folder (this is what we’d actually serve up to a client browser).

TypeScript

Keeping this as simple as possible (there are plenty of options to play with), the two big decisions are which target and which module version to use. Fortunately, we don’t really have to care too much – the output just has to be consumable by Babel. To get all the features possible (mmm, delicious features), we can target e.g. ES2017 and use commonjs.

{
  "compilerOptions": {
    "target": "es2017",      /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', or 'ESNEXT'. */
    "module": "commonjs",    /* Specify module code generation: 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
    "rootDir": "./src",      /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
    "outDir": "./build-tsc", /* Redirect output structure to the directory. */
  }
}

Babel

Again, doing as little as possible, we’ll tell Babel to do whatever it needs to do to target what is apparently ~95% of users’ browsers. For some reason, Babel doesn’t support setting the output directory in the configuration file (see the options here); it has to be passed as an argument when invoking Babel.

{
  "presets": [
    ["env", {
      "targets": {
        "browsers": ["last 2 versions", "safari >= 7"]
      }
    }]
  ]
}

Webpack

Likewise, Webpack doesn’t have to be that complicated to start with. We’ll include source maps here; don’t feel obliged to do so though.

const path = require('path');

module.exports = {
  devtool: "source-map",
  entry: './build-babel/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  }
};

package.json

To avoid having to remember anything, a few scripts in package.json can be useful. Breaking them out so it’s clear which step is doing what, it could look like this:

"scripts": {
  "clean": "yarn run clean-build-steps && rm -rf dist",
  "tsc": "./node_modules/.bin/tsc",
  "babel": "./node_modules/.bin/babel build-tsc --out-dir build-babel --source-maps",
  "webpack": "webpack && cp public/* dist",
  "clean-build-steps": "rm -rf build-tsc build-babel",
  "build": "yarn run clean && yarn run tsc && yarn run babel && yarn run webpack && yarn run clean-build-steps"
}

Build

Running yarn build (after the initial install) will:

  1. Clean anything from previous executions of the script
    1. This includes any leftover build artifacts, as well as the dist directory
  2. Use the TypeScript compiler to take everything from the src directory, transpile it to ES2017 JavaScript, and output it into the build-tsc directory
  3. Use Babel to convert everything in the build-tsc directory from ES2017 to ES2015 and output it into build-babel
  4. Use Webpack:
    1. Look in the build-babel folder
    2. Find index.js
    3. Parse index.js as an entrypoint, and resolve dependencies
    4. Add everything needed into one big bundle.js
  5. Create the “deployable” directory
    1. Copy the static HTML into the dist directory
    2. Copy the bundle.js into the dist directory

Serve

With something like http-server serving the dist directory, we can see the product of our work!

$ http-server dist
Starting up http-server, serving dist
Available on:
  http://127.0.0.1:8080
  http://10.0.2.15:8080
  http://172.17.0.1:8080
Hit CTRL-C to stop the server

See the GitHub repository here and the deployed example here.