Author Archives: tobymurray

How to run TypeScript in the browser

Short answer

You can’t, that’s not a thing (at least so far).

Longer answer:

By building an overly complicated front-end tool chain! Seriously, it’s crazy how much this is not out-of-the-box. The preface, as always, is that I don’t really know what I’m doing, so I certainly wouldn’t recommend this for any real projects – I’m just using it for experimentation.

Tools needed

  • NodeJS
    • JavaScript run-time environment
    • Needed to actually run JavaScript and all the tooling
  • TypeScript
    • Typed language that compiles to JavaScript, comes with a compiler
    • This is what we want to write!
  • Babel
    • JavaScript-to-JavaScript compiler, with the ability to compile new language features down to older versions of the language
    • Needed to convert the version of JavaScript we’re writing to a version browsers can execute
  • Webpack
    • Asset bundler – decouples your development project structure from the deliverable
    • Not strictly needed, but extremely useful for any “real” project

Steps

It’ll look something like this when all done:


  1. Write code in TypeScript
  2. Use TypeScript compiler to compile TypeScript into a recent version of JavaScript, without providing backwards compatibility or browser polyfilling
  3. Use Babel compiler to turn recent version of JavaScript, which browsers can’t natively execute, into a version browsers can execute
  4. Use Webpack to grab your assortment of JavaScript files, organized however you want for development, and create a more easily deliverable “bundle” of everything

From the beginning, that means:

    1. Install NodeJS (use the latest version unless you have reason to do otherwise)
    2. Create your project
$ yarn init
yarn init v0.27.5
question name (typescript-front-end-seed): 
question version (1.0.0): 
question description: Seed project for TypeScript in the browser
question entry point (index.js): 
question repository url (https://github.com/tobymurray/typescript-front-end-seed.git): 
question author (Toby Murray <murray.toby+github@gmail.com>): 
question license (MIT): 
success Saved package.json
Done in 34.38s.
    3. Add all the dependencies we’ll need – TypeScript, Babel (note Babel by itself doesn’t really do anything, you need to include a plugin), and Webpack
$ yarn add -D typescript babel-cli babel-preset-env webpack
    4. Create whatever project structure you want. I’ll do something like src/ for TypeScript code, and public/ for static files (e.g. HTML). A sample src/index.ts follows this list.
$ mkdir src public
$ touch public/index.html src/index.ts
    5. Create the configuration files you’ll need for all the tools – tsconfig.json for TypeScript, .babelrc for Babel and webpack.config.js for Webpack
$ touch tsconfig.json .babelrc webpack.config.js
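
As promised in step 4, src/index.ts needs some content to compile. Any small snippet will do – this is a hypothetical example, using async/await (an ES2017 feature) so the TypeScript compiler has types to strip and Babel has something it may need to downlevel:

// src/index.ts - a hypothetical placeholder, just to have something to build.
// async/await is ES2017, so tsc (targeting es2017) emits it untouched and
// Babel decides whether the target browsers need it compiled away.
async function greet(name: string): Promise<string> {
  return `Hello, ${name}!`;
}

greet('browser').then(greeting => {
  document.body.textContent = greeting;
});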

Tool configuration

Now comes either the interesting part or the awful part, depending on your perspective – configuring all the tools to do what we want! To keep things clear, we’ll place the output of the TypeScript compiler into a build-tsc folder, then we’ll feed that as input into Babel. The output of Babel will go into a build-babel folder. We’ll then use Webpack to consume the contents of the build-babel folder and put the result in a dist folder (this is what we’d actually serve up to a client browser).

TypeScript

Keeping this as simple as possible (there are plenty of options to play with), the two big decisions are which target and which module version to use. Fortunately, we don’t really have to care too much; the output just has to be consumable by Babel. To get all the features possible (mmm, delicious features), we can target e.g. ES2017, and use commonjs.

{
  "compilerOptions": {
    "target": "es2017",      /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', or 'ESNEXT'. */
    "module": "commonjs",    /* Specify module code generation: 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
    "rootDir": "./src",      /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
    "outDir": "./build-tsc"  /* Redirect output structure to the directory. */
  }
}

Babel

Again, doing as little as possible, we’ll tell Babel to do whatever it needs to do to target (apparently) 95% of users’ browsers. For some reason, Babel does not support setting the output directory in the configuration file (see options here); it has to be passed as an argument when Babel is invoked, as shown after the config below.

{
  "presets": [
    ["env", {
      "targets": {
        "browsers": ["last 2 versions", "safari >= 7"]
      }
    }]
  ]
}
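
The invocation with the output directory argument looks like this – it’s the same command that ends up in the package.json babel script later on:

$ ./node_modules/.bin/babel build-tsc --out-dir build-babel --source-maps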

Webpack

Likewise, to start with, Webpack doesn’t have to be that complicated. We’ll include source maps here; don’t feel obliged to do so, though.

const path = require('path');

module.exports = {
  devtool: "source-map",
  entry: './build-babel/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  }
};

package.json

To avoid having to remember anything, a few scripts in package.json can be useful. Breaking them out so it’s clear what step is doing what, it could look like this:

"scripts": {
  "clean": "yarn run clean-build-steps && rm -rf dist",
  "tsc": "./node_modules/.bin/tsc",
  "babel": "./node_modules/.bin/babel build-tsc --out-dir build-babel --source-maps",
  "webpack": "webpack && cp public/* dist",
  "clean-build-steps": "rm -rf build-tsc build-babel",
  "build": "yarn run clean && yarn run tsc && yarn run babel && yarn run webpack && yarn run clean-build-steps"
}

Build

Running yarn build (after the initial install) will:

  1. Clean anything from previous executions of the script
    1. This includes any leftover build artifacts, as well as the dist directory
  2. Use the TypeScript compiler to take everything from the src directory, transpile it to ES2017 JavaScript, and output it into the build-tsc directory
  3. Use Babel to convert everything in the build-tsc directory from ES2017 to whatever the target browsers can run, and output it into build-babel
  4. Use Webpack:
    1. Look in the build-babel folder
    2. Find index.js
    3. Parse index.js as an entrypoint, and resolve dependencies
    4. Add everything needed into one big bundle.js
  5. Create the “deployable” directory
    1. Copy the static HTML into the dist directory
    2. Copy the bundle.js into the dist directory

Serve

With something like http-server and serving the dist directory, we can see the product of our work!

$ http-server dist
Starting up http-server, serving dist
Available on:
  http://127.0.0.1:8080
  http://10.0.2.15:8080
  http://172.17.0.1:8080
Hit CTRL-C to stop the server

See the GitHub repository here and the deployed example here.

Automatically move downloaded torrents to remote machine

Setting up users for file transfer

The Transmission installation creates a debian-transmission user and group to run the daemon. It’s done this way to limit the risks if someone gains access to the user (through a Transmission bug, for example). This means the debian-transmission user is going to be the one executing the post-download script. The only way I’m aware of to transfer files to another machine while maintaining the restricted nature of the user is to create a similarly minimally privileged user on the remote system, as the recipient of the files.

Assuming you’re using debian-transmission and you’ve created a corresponding user on the other machine (we’ll call them remote-user), you’ll want to set up an SSH key pair with the remote machine. For me, that was 192.168.1.20:

$ sudo mkdir /var/lib/transmission-daemon/.ssh
$ sudo ssh-keygen -f /var/lib/transmission-daemon/.ssh/id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/transmission-daemon/.ssh/id_rsa.
Your public key has been saved in /var/lib/transmission-daemon/.ssh/id_rsa.pub.
...
$ ssh-copy-id -i /var/lib/transmission-daemon/.ssh/id_rsa remote-user@192.168.1.20
...

Now, you need to do a little dance to get the known_hosts file populated. I don’t know of a better way to do this, but here’s what I did:

$ sudo su
# ssh-keyscan 192.168.1.20 >>/var/lib/transmission-daemon/.ssh/known_hosts
...
# exit

Then change the permissions so that debian-transmission owns everything.

$ sudo chown -R debian-transmission:debian-transmission /var/lib/transmission-daemon/

Post-torrent-download script

Create a script, and put it anywhere you’d like. I put mine in /usr/local/bin/after-torrent-downloaded.sh

$ sudo touch /usr/local/bin/after-torrent-downloaded.sh
$ sudo chown debian-transmission:debian-transmission /usr/local/bin/after-torrent-downloaded.sh
$ sudo chmod +x /usr/local/bin/after-torrent-downloaded.sh

For our purposes, there are two important environment variables Transmission exposes (see https://trac.transmissionbt.com/wiki/Scripts): TR_TORRENT_DIR, the absolute directory path, and TR_TORRENT_NAME, the torrent’s name. With all this done, the script is completely trivial. This is mine:

#!/bin/bash

USERNAME=remote-user
HOST=192.168.1.20
TARGET_DIRECTORY=/home/remote-user/files

# Transmission sets TR_TORRENT_DIR and TR_TORRENT_NAME before calling this script
scp -r "$TR_TORRENT_DIR/$TR_TORRENT_NAME" "$USERNAME@$HOST:$TARGET_DIRECTORY"

Note: This relies on the target directory (/home/remote-user/files) already existing – if it doesn’t, make it.

Transmission configuration

Note: The client should be closed before making changes, otherwise your settings will be reverted to their previous state.

First things first, find out where the configuration file you’re going to be changing lives – see the Transmission wiki. For me it was in the /var/lib/transmission-daemon/.config folder. In terms of changes to be made, there’s another wiki page. settings.json is the one we need, and there are only two values we need to worry about.

$ sudo nano /var/lib/transmission-daemon/.config/transmission-daemon/settings.json

Change "script-torrent-done-enabled": false, to "script-torrent-done-enabled": true,

Change "script-torrent-done-filename": "" to "script-torrent-done-filename": "/usr/local/bin/after-torrent-downloaded.sh" or whatever the path is to your script.

Save settings.json and make Transmission respect your changes with:

$ sudo killall -HUP transmission-daemon

That’s all there is to it!

Try downloading a torrent, and when it’s completed take a look at the Transmission logs:

$ sudo journalctl -u transmission-daemon.service

Every time a torrent finishes, it should be copied to the configured remote server.

Email with Gmail, NodeJS, and OAuth2

If you look around for examples of how to send an email via Gmail with NodeJS, they generally end up mentioning you should flip the toggle to Allow less secure apps:

Screenshot of Gmail Less secure apps setting page

This doesn’t seem like a good idea – I mean it SAYS “less secure”. I looked around, and while Google has tons of documentation, I found it a bit overwhelming. That said, the NodeJS quickstart is a great place to start. It shows how to set up a client to authenticate with Google in the “more secure” fashion. I’ll go through that quickstart here, with a couple tweaks to send email.

First things first, install the necessary dependencies:

yarn add google-auth-library googleapis js-base64

Then steal most of the quickstart.js, swapping out enough to send an email. Note that this is my first time ever interacting with the Gmail API, so while this worked to send an email for me, no guarantees…

Pull in all the dependencies:

const fs = require('fs');
const readline = require('readline');
const google = require('googleapis');
const googleAuth = require('google-auth-library');
const Base64 = require('js-base64').Base64;

Choose the appropriate Auth Scopes for what you’re trying to accomplish:

const SCOPES = ['https://mail.google.com/',
  'https://www.googleapis.com/auth/gmail.modify',
  'https://www.googleapis.com/auth/gmail.compose',
  'https://www.googleapis.com/auth/gmail.send'
];

Define where you’re going to store the auth token once you get it:

const TOKEN_DIR = (process.env.HOME || process.env.HOMEPATH ||
  process.env.USERPROFILE) + '/.credentials/';
const TOKEN_PATH = TOKEN_DIR + 'gmail-nodejs-quickstart.json';

First, we’ll want to read the client secret that was created in the manual setup phase.

/**
 * Read the contents of the client secret JSON file
 * 
 * @param {String} filename - name of the file containing the client secrets
 */
function readClientSecret(filename) {
  return new Promise((resolve, reject) => {
    fs.readFile(filename, (err, content) => {
      if (err) {
        return reject('Error loading client secret from ' + filename +
          ' due to ' + err);
      }
      return resolve(content);
    });
  });
}

Then, after parsing that JSON file, we’ll want to build Google’s OAuth2 client, as they’re nice enough to provide one for us.

/**
 * Create an OAuth2 client with the given credentials
 *
 * @param {Object} credentials The authorization client credentials.
 */
function authorize(credentials) {
  let clientSecret = credentials.installed.client_secret;
  let clientId = credentials.installed.client_id;
  let redirectUrl = credentials.installed.redirect_uris[0];
  let auth = new googleAuth();
  let oauth2Client = new auth.OAuth2(clientId, clientSecret, redirectUrl);

  return new Promise((resolve, reject) => {
    // Try reading the existing token
    fs.readFile(TOKEN_PATH, function (err, token) {
      if (err) {
        // If there isn't an existing token, get a new one
        resolve(getNewToken(oauth2Client));
      } else {
        oauth2Client.credentials = JSON.parse(token);
        resolve(oauth2Client);
      }
    });
  });
}

If this is the first time executing the program, or you’ve deleted the cached token, you’ll need to get a new one.

/**
 * Get and store new token after prompting for user authorization, then return
 * authorized OAuth2 client.
 *
 * @param {google.auth.OAuth2} oauth2Client The OAuth2 client to get token for.
 */
function getNewToken(oauth2Client) {
  let authUrl = oauth2Client.generateAuthUrl({
    access_type: 'offline',
    scope: SCOPES
  });

  console.log('Authorize this app by visiting this url: ', authUrl);

  let readlineInterface = readline.createInterface({
    input: process.stdin,
    output: process.stdout
  });

  return new Promise((resolve, reject) => {
    readlineInterface.question('Enter the code from that page here: ',
      (code) => {
        readlineInterface.close();
        oauth2Client.getToken(code, (err, token) => {
          if (err) {
            return reject('Error while trying to retrieve access token: ' + err);
          }

          oauth2Client.credentials = token;
          storeToken(token);
          return resolve(oauth2Client);
        });
      });
  });
}

To avoid having to do this on every call, it makes sense to write it out to the disk.

/**
 * Store token to disk to be used in later program executions.
 *
 * @param {Object} token The token to store to disk.
 */
function storeToken(token) {
  try {
    fs.mkdirSync(TOKEN_DIR);
  } catch (err) {
    if (err.code != 'EEXIST') {
      throw err;
    }
  }
  fs.writeFileSync(TOKEN_PATH, JSON.stringify(token));
  console.log('Token stored to ' + TOKEN_PATH);
}

At this point, our OAuth2 client is authenticated and ready to roll! If we’ve set up the Auth Scopes properly, our client should also be authorized to do whatever we want it to do. There are a handful of libraries that make this easier, but for simplicity’s sake we’ll just hand-roll an email string.

/**
 * Build an email as an RFC 5322 formatted, Base64 encoded string
 * 
 * @param {String} to email address of the receiver
 * @param {String} from email address of the sender
 * @param {String} subject email subject
 * @param {String} message body of the email message
 */
function createEmail(to, from, subject, message) {
  let email = ["Content-Type: text/plain; charset=\"UTF-8\"\n",
    "MIME-Version: 1.0\n",
    "Content-Transfer-Encoding: 7bit\n",
    "to: ", to, "\n",
    "from: ", from, "\n",
    "subject: ", subject, "\n\n",
    message
  ].join('');

  return Base64.encodeURI(email);
}

Then the actual magic! Using our authenticated client and our formatted email to send the email. I’m not positive on this part, as I didn’t find a specific example that did it exactly as I was expecting (I also didn’t look too hard…)

/**
 * Send an email via the Gmail API.
 *
 * @param {String} email RFC 5322 formatted, Base64 encoded string.
 * @param {google.auth.OAuth2} oauth2Client The authorized OAuth2 client
 */
function sendMessage(email, oauth2Client) {
  google.gmail('v1').users.messages.send({
    auth: oauth2Client,
    userId: 'me', // 'me' is a special value meaning the authenticated user
    resource: {
      raw: email
    }
  });
}

Then it’s just a matter of stringing everything together. The invocation part of the script:

let to = 'mmonroe@gmail.com';
let from = 'ckent@gmail.com';
let subject = 'Email subject generated with NodeJS';
let message = 'Big long email body that has lots of interesting content';

readClientSecret('client_secret.json')
  .then(clientSecretJson => {
    let clientSecret = JSON.parse(clientSecretJson);
    return authorize(clientSecret);
  }).then(oauth2client => {
    let email = createEmail(to, from, subject, message);
    sendMessage(email, oauth2client);
  }).catch(error => {
    console.error(error);
  });

And that’s all. The first execution prompts for the code from the authorization URL, then sends the email; subsequent executions just send the email. Easy enough!

Not quite unit testing ExpressJS

Everything is awful

I tried. First things first, Node isn’t running in the browser. The vast majority of modern JavaScript testing tools open up a browser. Of course this makes sense, but what a pain! If I see one more article about some web developer somewhere writing what’s effectively a unit test but doing it with a browser, I’m going to explode. “But what about PhantomJS?” you might say. PhantomJS is the answer to an entirely different question (also the project is over now that Chrome and Firefox are going headless). Server side code that only ever runs with Node should not be tested by a browser. The amount of complication and overhead being introduced is insane.

What are the options?

Here are some links:

Mocha, Jasmine, Karma, Tape, Sinon, Jest. As with everything else in the modern JavaScript ecosystem, there are a million and a half options, each considered an anti-pattern by one person or another. There are many articles by enlightened people who suggest using Tape instead of one of those bloated, new-agey frameworks. I was actually pretty convinced by one such article; it hit a lot of points that resonated with me. Then I used it. Oh, you want to use TypeScript? Maybe some promises or other asynchronous things? How about creating a test structure that allows you to organize in a meaningful way? Now that you’re a minimalist you can do all that work yourself!

Everyone is awful

What slays me about articles like that is comments like these:

Think you’ll miss automagic test parallelization? I keep tests for different modules in different files. It takes about five minutes to write a little wrapper that will fire up workers across all your machine cores and zip through them in parallel.

and this…

Before/After/BeforeEach/AfterEach

You don’t need these. They’re bad for your test suite. Really. I have seen these abused to share global state far too often. Try this, instead:

*sigh*. People will always abuse stuff. Abandoning functionality because it can be misused should be carefully considered. Code reuse is a spectrum. There are people who pull giant libraries just to use a 3 line function, and they get shamed for it (jQuery anyone?). In my experience, the other end of that spectrum is just as bad. I’ve seen multiple implementations of HTML sanitizers. WHY?! It’s going to suck and be wrong and probably open you up to some code injection. Someone else has done it better.

What to do?

First, one of the things I realized was SUPER important about this effort is being mindful of what you’re trying to test. I too often found myself most of the way through writing a test, only to discover I was really just testing the router or some other provided piece of functionality. I tried a few strategies and it all seemed like a bit of a mess. There’s a blog post here with some information, but I don’t really buy it all.

The most effective strategy I found was really framework independent. Take a look through focusaurus’ repository here – and I mean really look at it. While I didn’t follow the suggestions there 100%, by incorporating some of the ideas I was able to break most of my application into plain old JavaScript.
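
To make that concrete, here’s a sketch of the decomposition (all names hypothetical): the logic lives in a plain module that has never heard of Express, and the route handler is a thin adapter around it.

// user-logic.js - plain old JavaScript; no Express, no HTTP, trivially testable
function summarizeUser(user) {
  if (!user) throw new Error('No user provided');
  return { name: user.firstName + ' ' + user.lastName };
}

module.exports = { summarizeUser };

// routes.js - the only part that knows about Express, and there's barely
// anything left here worth pointing a browser at
const { summarizeUser } = require('./user-logic');

module.exports = function (app, userStore) { // userStore is a hypothetical data layer
  app.get('/users/:id', (req, res) => {
    res.json(summarizeUser(userStore.find(req.params.id)));
  });
};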

Which framework should I use?

  • Purely Node? Try tape (minimal example below)
  • Angular? You’ll likely be using Mocha and Jasmine
  • React? Jest looks like it’s the way to go

Every testing framework I’ve ever used has been a bit of a pain. Tests end up extremely coupled to the framework itself. Once the application has been decomposed nicely, I honestly don’t think frameworks make much difference. Read the examples, pick the one you like the style of most, then deal with all the ways it kind of sucks.
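
For what it’s worth, a minimal tape test against the plain module sketched earlier (hypothetical names again) looks like this:

const test = require('tape');
const { summarizeUser } = require('./user-logic');

test('summarizeUser combines first and last name', t => {
  const summary = summarizeUser({ firstName: 'Ada', lastName: 'Lovelace' });
  t.equal(summary.name, 'Ada Lovelace');
  t.end();
});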

nginx for Node application deployment

What is nginx

“NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.”.

Why use it

For me, the biggest motivation is as an SSL termination point. With Let’s Encrypt offering free certificates, SSL is a no-brainer, even for weekend projects. It sucks setting it up over and over and over again (even though it’s WAY easier than it was a couple years ago). I have one domain that I use as a staging area while I’m playing around with various ideas. Rather than set up SSL certificates for the various servers running on that machine, nginx is the SSL termination point, which then proxies the requests.

Additionally, nginx is a super fast and simple static file server. If you’re just messing around with this week’s SPA framework, you don’t need an application server at all. This means most of your front end can be delivered as fast as possible, and fast is usually a good thing.

How to use it

First and foremost, there’s a great DigitalOcean tutorial here about deploying NodeJS servers.

One of the common use cases I find myself in is the following:

  • I have a client that I want served (HTML, JavaScript, CSS etc)
  • I have a server that provides an API, usually exclusively intended for the client
  • I don’t really want the client and the server to be directly aware of one-another

With that in mind, for the hostname example.ca I would create /etc/nginx/sites-available/example.ca with something like the following content:
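
(The embedded config didn’t survive the import; what follows is a sketch reconstructed from the description below. The try_files line and exact rewrite syntax are my assumptions, and the Certbot-managed SSL lines are omitted.)

server {
    listen 80;
    server_name example.ca;

    # Serve the static client files (favicon, HTML, CSS, JavaScript, etc.)
    location / {
        root /home/toby/client;
        try_files $uri $uri/ =404;
    }

    # Strip the /api/ prefix and hand the request to the Express server
    location /api/ {
        rewrite ^/api/(.*)$ /$1 break;
        proxy_pass http://127.0.0.1:3000;
    }

    # ... SSL configuration appended by EFF's Certbot goes here ...
}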

What does this mean?

For the most part, I have no idea. I imagine I copy/pasted it off a StackOverflow answer at some point. I’ll look it up when I have time.

nginx will be listening on port 80, and if someone comes knocking at the root path / it will try and serve up the client files that live in /home/toby/client (favicon, HTML, css, JavaScript etc).

If someone makes a request prefixed by /api/, the request is rewritten to strip off that prefix (so https://example.ca/api/endpoint/test would turn into http://127.0.0.1:3000/endpoint/test) and proxied off to an Express server that is running on port 3000 (in this example).

The SSL lines at the end of the config are all EFF’s Certbot‘s doing.

Is it NGINX, nginx, or Nginx?

I think nginx is the original form [citation needed], Nginx Inc. is the name of the company that backs it, and NGINX is the logo and modern “marketing” name. From that perspective, the software is likely most correctly referred to as NGINX, but practically speaking my impression is most people use nginx. Probably could be clearer…

Authentication and Authorization with Express PostGraphQL Server


For weekend projects, I generally want to get up and running as quickly as possible. One constant for almost all web applications is authentication, authorization, and me not really wanting to deal with either. It always seems like a pain! While trying to still stay in the realm of learning new things, I figured I’d give PostGraphQL a shot. Sticking with technologies I’ve previously used, I’ll use PostGraphQL as middleware with Express, and run PostgreSQL in Docker.

Note: This is essentially a reimplementation of the wonderful PostGraphQL tutorial here.
All code is available here.

Setting up Docker

First things first, I’ll need an actual database server. My go-to for this is Docker, as it’s easy to manage many instances, easy to scrap and start fresh, and easy to validate that my provisioning works as expected. With Docker installed, it’s a simple:

docker run --restart=always -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=password -d postgres:alpine

Basic database configuration

Before getting into tables and functions, I’ll need a database instance on the server. Doing this in psql is my go-to.

 CREATE DATABASE auth;

Then connect to it

 \c auth

I’m going to be encrypting passwords, so I’ll add the pgcrypto extension. I’m also going to be dealing with email addresses, and for sanity’s sake I’m going to treat the entire address as case insensitive. I know that’s not technically accurate, but it’s a usability nightmare otherwise. To do so, I’ll enable the citext extension.

CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "citext";

Both PostGraphQL and PostgREST use schemas (or schemata if you’re that kind of person) to scope entities. There’s some good reading about it here. The simplest setup is to have one public schema, which will turn into the API, and one private schema, which will be kept “secret”.

CREATE SCHEMA auth_public; 
CREATE SCHEMA auth_private;

Following PostgREST some more, there are 3 main roles (when using row level security) – unauthenticated/anonymous, authenticated, and the role used by the actual framework itself (I’ve called it auth_postgraphql). The role used by the framework should be able to do everything the other two roles can.

CREATE ROLE auth_postgraphql LOGIN PASSWORD 'password'; 

CREATE ROLE auth_anonymous; 
GRANT auth_anonymous TO auth_postgraphql; 

CREATE ROLE auth_authenticated; 
GRANT auth_authenticated TO auth_postgraphql;

Schema design

Tables

Now the actual schema. For this seed project, I’m going to keep it about as minimal as possible while still allowing for authorization.


Users have first names, last names, and unique IDs; privately, they also have an email address (this is their username) and a password.

Creating these two tables in their respective schemas:

CREATE TABLE auth_public.user ( 
  id              serial primary key, 
  first_name      text not null check (char_length(first_name) < 80), 
  last_name       text check (char_length(last_name) < 80), 
  created_at      timestamp default now() 
);
CREATE TABLE auth_private.user_account ( 
  user_id         integer primary key references auth_public.user(id) on delete cascade, 
  email           citext not null unique, 
  password_hash   text not null 
);

Authorization

PostGraphQL makes authorization pretty straightforward by delegating it to the database. PostgreSQL has Row-Level Security (as of 9.5), which means a naive implementation of authorization is to restrict users by only letting them modify their own rows (where their id matches the id of the row).

Enable RLS on the user table:

ALTER TABLE auth_public.user ENABLE ROW LEVEL SECURITY;

And set policies so users can interact with their own rows. Everyone (unauthenticated included) will be able to query the table, but only authenticated users will be able to update or delete entries, and only their own.

CREATE POLICY select_user ON auth_public.user FOR SELECT
  using(true);

CREATE POLICY update_user ON auth_public.user FOR UPDATE TO auth_authenticated 
  using (id = current_setting('jwt.claims.user_id')::integer); 

CREATE POLICY delete_user ON auth_public.user FOR DELETE TO auth_authenticated 
  using (id = current_setting('jwt.claims.user_id')::integer);

JWT for authentication

Before going any further, I have enough information to be able to create the type I’ll be using for my JWT. Keeping this simple, it will have role for authentication and user_id for authorization.

CREATE TYPE auth_public.jwt as ( 
  role    text, 
  user_id integer 
);

Functions

I’ll create 3 functions:

  1. register a new user
  2. authenticate that user with a provided email and password
  3. show who the current user is

CREATE FUNCTION auth_public.register_user( 
  first_name  text, 
  last_name   text, 
  email       text, 
  password    text 
) RETURNS auth_public.user AS $$ 
DECLARE 
  new_user auth_public.user; 
BEGIN 
  INSERT INTO auth_public.user (first_name, last_name) values 
    (first_name, last_name) 
    returning * INTO new_user; 
    
  INSERT INTO auth_private.user_account (user_id, email, password_hash) values 
    (new_user.id, email, crypt(password, gen_salt('bf'))); 
    
  return new_user; 
END; 
$$ language plpgsql strict security definer;
CREATE FUNCTION auth_public.authenticate ( 
  email text, 
  password text 
) returns auth_public.jwt as $$ 
DECLARE 
  account auth_private.user_account; 
BEGIN 
  SELECT a.* INTO account 
  FROM auth_private.user_account as a 
  WHERE a.email = $1; 

  if account.password_hash = crypt(password, account.password_hash) then 
    return ('auth_authenticated', account.user_id)::auth_public.jwt; 
  else 
    return null; 
  end if; 
END; 
$$ language plpgsql strict security definer;
CREATE FUNCTION auth_public.current_user() RETURNS auth_public.user AS $$
SELECT *
FROM auth_public.user
WHERE id = current_setting('jwt.claims.user_id')::integer
$$ language sql stable;

Permissions

Everything I need for this seed project is defined now, so time to sort out the permissions for the various roles.

GRANT USAGE ON SCHEMA auth_public TO auth_anonymous, auth_authenticated; 
GRANT SELECT ON TABLE auth_public.user TO auth_anonymous, auth_authenticated; 
GRANT UPDATE, DELETE ON TABLE auth_public.user TO auth_authenticated; 
GRANT EXECUTE ON FUNCTION auth_public.authenticate(text, text) TO auth_anonymous, auth_authenticated; 
GRANT EXECUTE ON FUNCTION auth_public.register_user(text, text, text, text) TO auth_anonymous; 
GRANT EXECUTE ON FUNCTION auth_public.current_user() TO auth_anonymous, auth_authenticated;

Set up server

Create a regular ol’ Express server with dotenv to ensure our environment specific details don’t leak out.

$ yarn init
yarn init v0.24.6
question name (auth-server):
question version (1.0.0):
question description:
question entry point (index.js):
question repository url:
question author:
question license (MIT):
success Saved package.json
Done in 2.82s.
$ yarn add express dotenv
yarn add v0.24.6
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
success Saved 43 new dependencies.
... (snip all the dependencies) ...
$ touch index.js .env

Now set up a barebones Express server, e.g.:

require('dotenv').config();
const express = require('express');

const app = express();

app.use(function (req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

app.use(function (err, req, res, next) {
  res.status(err.status || 500);
  res.send('Error! ' + err.message + (req.app.get('env') === 'development' ? ' ' + err.stack : ''));
});

app.listen(process.env.PORT);

Make sure you’ve added PORT to your .env file and set it appropriately – e.g. PORT=3000

Integrate PostGraphQL middleware

This part is arguably slightly more interesting, but still just using configuration to wire things together:
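
The embedded snippet was lost somewhere along the way, so here’s a minimal sketch of the wiring, assuming the postgraphql package’s options as they existed at the time (jwtSecret, jwtPgTypeIdentifier, pgDefaultRole), the auth_postgraphql role created earlier, and a JWT_SECRET entry in .env:

const { postgraphql } = require('postgraphql');

app.use(postgraphql(
  'postgres://auth_postgraphql:password@localhost:5432/auth', // role created earlier
  'auth_public', // only the public schema is exposed through the API
  {
    graphiql: true,                         // serve the GraphiQL IDE at /graphiql
    jwtSecret: process.env.JWT_SECRET,      // used to sign the JWTs
    jwtPgTypeIdentifier: 'auth_public.jwt', // the composite type defined above
    pgDefaultRole: 'auth_anonymous'         // role assumed when no valid JWT is present
  }
));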

Interactive testing

Database level

Before getting any application level concerns involved, I like to test that everything works at the database level. I’ll register a user (using the function), make sure it populates both the public and private tables, intentionally fail to authenticate against it, successfully authenticate against it, then finally clean up the user.

auth=# SELECT auth_public.register_user ('firstname', 'lastname', 'email', 'password');
                    register_user                    
-----------------------------------------------------
 (1,firstname,lastname,"2017-06-11 04:05:39.216743")
(1 row)
auth=# SELECT *
FROM auth_public.user
JOIN auth_private.user_account
  ON auth_public.user.id = auth_private.user_account.user_id
;
 id | first_name | last_name |         created_at         | user_id | email |                        password_hash                         
----+------------+-----------+----------------------------+---------+-------+--------------------------------------------------------------
  1 | firstname  | lastname  | 2017-06-11 04:05:39.216743 |       1 | email | $2a$06$PZ9NUmYpgDjk8QJuDwah.OJSt/Quo53Qzkddc5ccOSYpuzYXdfYJO
(1 row)
auth=# SELECT auth_public.authenticate('email', 'wrong-password');
 authenticate 
--------------
 
(1 row)
auth=# SELECT auth_public.authenticate('email', 'password');
 authenticate 
--------------
 (user,1)
(1 row)
auth=# DELETE FROM auth_public.user;
DELETE 1

GraphiQL level

  1. Navigate to GraphiQL on the port you’ve configured (3000 by default)
    – e.g. http://localhost:3000/graphiql

Create a user

  1. Register a user via GraphQL mutation
mutation {
  registerUser(input: {
    firstName: "Genghis"
    lastName: "Khan"
    email: "Genghis@khan.mn"
    password: "Genghis1162"
  }) {
    user {
      id
      firstName
      lastName
      createdAt
    }
  }
}
  2. Observe the response
{
  "data": {
    "registerUser": {
      "user": {
        "id": 2,
        "firstName": "Genghis",
        "lastName": "Khan",
        "createdAt": "2017-06-11T06:17:39.084578"
      }
    }
  }
}

Observe authentication working

  1. Try authenticating with a GraphQL mutation
mutation {
  authenticate(input: {
    email: "Genghis@khan.mn"
    password: "Genghis1162"
  }) {
    jwt 
  }
}
  2. Observe the response
{
  "data": {
    "authenticate": {
      "jwt": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aF9hdXRoZW50aWNhdGVkIiwidXNlcl9pZCI6MiwiaWF0IjoxNDk3MTYyMTIyLCJleHAiOjE0OTcyNDg1MjIsImF1ZCI6InBvc3RncmFwaHFsIiwiaXNzIjoicG9zdGdyYXBocWwifQ.hLZ7p3vJs3UYW9IKB7u8tbXONUl_tZoWhiAAD1-OPQg"
    }
  }
}

Try making an unauthenticated request when authentication is necessary

  1. currentUser is protected, so query that
query {
  currentUser{
    id
    firstName
    lastName
    createdAt
  }
}
  2. Observe the not-particularly-friendly response
{
  "errors": [
    {
      "message": "unrecognized configuration parameter \"jwt.claims.user_id\"",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "currentUser"
      ]
    }
  ],
  "data": {
    "currentUser": null
  }
}

Try making an authenticated request when authentication is necessary

  1. You’ll need the ability to send your JWT to the server, which unfortunately isn’t possible with vanilla GraphiQL
  2. Set an authorization header by copy/pasting the value out of the `jwt` field in the `authenticate` response above.
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aF9hdXRoZW50aWNhdGVkIiwidXNlcl9pZCI6MSwiaWF0IjoxNDk3MTYwNzA3LCJleHAiOjE0OTcyNDcxMDcsImF1ZCI6InBvc3RncmFwaHFsIiwiaXNzIjoicG9zdGdyYXBocWwifQ.aInZvEVhhDfi9yQDWRzvmSaE7Mk2PufbBrY3rxGlEt8
  • Don’t forget the Bearer on the right side of the header, otherwise you’ll likely see Authorization header is not of the correct bearer scheme format.
  3. Submit the query with the authorization header attached
query {
  currentUser{
    nodeId
    id
    firstName
    lastName
    createdAt
  }
}
  4. Observe your now successful response
{
  "data": {
    "currentUser": {
      "nodeId": "WyJ1c2VycyIsMl0=",
      "id": 2,
      "firstName": "Genghis",
      "lastName": "Khan",
      "createdAt": "2017-06-11T06:17:39.084578"
    }
  }
}

Observe authorization working

  1. With the authorization header set, try updating Genghis
mutation {
  updateUser(input: {
    nodeId: "WyJ1c2VycyIsMl0="
    userPatch: {
      lastName: "NotKhan"
    }
  }) {
    user {
      nodeId
      id
      firstName
      lastName
      createdAt
    }
  }
}
  2. Observe that it works:
{
  "data": {
    "updateUser": {
      "user": {
        "nodeId": "WyJ1c2VycyIsMl0=",
        "id": 2,
        "firstName": "Genghis",
        "lastName": "NotKhan",
        "createdAt": "2017-06-11T06:17:39.084578"
      }
    }
  }
}
  3. Add a friend
mutation {
  registerUser(input: {
    firstName: "Serena"
    lastName: "Williams"
    email: "Serena@Williams.ca"
    password: "NotGhengis"
  }) {
    user {
      nodeId
      id
      firstName
      lastName
      createdAt
    }
  }
}
  4. Keeping Genghis’ JWT, try modifying your friend
  • Note this is Serena’s nodeId
mutation {
  updateUser(input: {
    nodeId: "WyJ1c2VycyIsM10="
    userPatch: {
      lastName: "KhanMaybe?"
    }
  }) {
    user {
      nodeId
      id
      firstName
      lastName
      createdAt
    }
  }
}
  5. Get rejected
{
  "errors": [
    {
      "message": "No values were updated in collection 'users' using key 'id' because no values were found.",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "updateUser"
      ]
    }
  ],
  "data": {
    "updateUser": null
  }
}

Conclusion

That’s it – the bulk of an application that has authentication and authorization already sorted. Missing a front end, but at least there’s GraphiQL?

Code is available here.

Delete non-empty folder over FTP with JavaScript

Xplornet, my ISP, offers 50 MB of free hosting. More interestingly, it’s a public hostname that I don’t have to worry about setting up or maintaining! While mucking around with JavaScript front end frameworks, I figured that’d be a neat place to drop my work. Right off the bat I was a little bit wary of the offering, as 50 MB is a weirdly tiny amount of space, which likely means Xplornet has no idea what they’re doing when it comes to web hosting. Upon signing up and receiving my concerningly short password in plaintext, which I also don’t seem to be able to change, I figured I was bang on with my suspicions. Nothing private going on that server, that’s for sure. Regardless, I figured I’d give it a try and see what I could do with it.

It looks like the only way to get files to the server is via FTP (of course no SFTP). Xplornet gave me an account with access to a less than ideal front end for loading up files, but that is all GUI driven and generally painful. I want to deploy my front end automatically, so time to figure out a little FTP.

FTP in JavaScript

I’m presently trying to develop my JavaScript skills, so I figured I’d start writing up a deployment script in JS. So far as I can tell, the FTP ecosystem is pretty sparse in the JavaScript world. I landed on jsftp as the library for backing the interaction, but it’s a bit rough around the edges. I figured I’d keep the script simple: delete everything in /public and replace it with the output of my build. I immediately tripped over FTP… I don’t really know anything about it, beyond using clients to move files around every now and then, but it appears as though deleting a non-empty directory is difficult or maybe impossible?

For lack of rm -rf

Since I can’t delete non-empty directories, the only other solution I can think of is to walk the directory and delete everything from the bottom up. So that’s what I decided to do! Note that this is literally the first iteration that managed to delete all the files in one shot, so hold off on the judgement… If I have spare time I might bundle it up into an NPM module, we’ll see how it goes.
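
Roughly, it looked like the sketch below. Treat it as exactly that – a sketch: it assumes jsftp’s ls() entries expose name and type (with directories as type 1), and it issues raw DELE/RMD commands for files and directories respectively. The credentials are placeholders.

const JSFtp = require('jsftp');

// Hypothetical credentials
const ftp = new JSFtp({
  host: 'ftp.example.com',
  user: 'username',
  pass: 'password'
});

// Delete a directory tree bottom-up: files first (DELE), then the
// now-empty directories (RMD), since FTP won't remove non-empty folders.
function deleteRecursively(path) {
  return new Promise((resolve, reject) => {
    ftp.ls(path, (err, entries) => {
      if (err) return reject(err);

      const deletions = entries.map(entry => {
        const childPath = path + '/' + entry.name;
        return entry.type === 1 // 1 means directory in jsftp's listing
          ? deleteRecursively(childPath)
          : new Promise((res, rej) =>
            ftp.raw('dele', childPath, e => e ? rej(e) : res()));
      });

      // Once all the children are gone, remove the directory itself
      Promise.all(deletions)
        .then(() => new Promise((res, rej) =>
          ftp.raw('rmd', path, e => e ? rej(e) : res())))
        .then(resolve, reject);
    });
  });
}

deleteRecursively('/public')
  .then(() => console.log('Deleted /public'))
  .catch(console.error);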

And that managed to do it. The next piece will be dumping the build output on the server, and then I should be good to go to actually use those 50 MB…