Automatically move downloaded torrents to remote machine

Setting up users for file transfer

The Transmission installation creates a debian-transmission user and group to run the daemon. It’s done this way to limit the risks if someone gains access to the user (through a Transmission bug, for example). This means the debian-transmission user is going to be the one executing the post-download script. The only way I’m aware of for transferring files to another machine while maintaining the restricted nature of the user is to create a similarly minimally privileged user on the remote system, as the recipient of the files.
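
For example, creating that minimally privileged user on the remote machine might look something like this – a sketch, with names matching the rest of this post:

$ sudo adduser --disabled-password --gecos "" remote-user
$ sudo mkdir -p /home/remote-user/files
$ sudo chown remote-user:remote-user /home/remote-user/files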

Assuming you’re using debian-transmission and you’ve created a corresponding user on the other machine (we’ll call them remote-user), you’ll want to set up an SSH key pair with the remote machine. For me, the remote machine was 192.168.1.20

$ sudo mkdir /var/lib/transmission-daemon/.ssh
$ sudo ssh-keygen -f /var/lib/transmission-daemon/.ssh/id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/transmission-daemon/.ssh/id_rsa.
Your public key has been saved in /var/lib/transmission-daemon/.ssh/id_rsa.pub.
...
$ ssh-copy-id -i /var/lib/transmission-daemon/.ssh/id_rsa remote-user@192.168.1.20
...

Now, you need to do a little dance to get the known_hosts file populated. I don’t know of a better way to do this, but here’s what I did:

$ sudo su
# ssh-keyscan 192.168.1.20 >>/var/lib/transmission-daemon/.ssh/known_hosts
...
# exit

Then change the permissions so that debian-transmission owns everything.

$ sudo chown -R debian-transmission:debian-transmission /var/lib/transmission-daemon/

Post-torrent-download script

Create a script, and put it anywhere you’d like. I put mine in /usr/local/bin/after-torrent-downloaded.sh

$ sudo touch /usr/local/bin/after-torrent-downloaded.sh
$ sudo chown debian-transmission:debian-transmission /usr/local/bin/after-torrent-downloaded.sh
$ sudo chmod +x /usr/local/bin/after-torrent-downloaded.sh

For our purposes, there are two important environment variables Transmission exposes (see https://trac.transmissionbt.com/wiki/Scripts): TR_TORRENT_DIR, the absolute directory path, and TR_TORRENT_NAME, the torrent’s name. With all this done, the script is completely trivial. This is mine:

#!/bin/sh

USERNAME=remote-user
HOST=192.168.1.20
TARGET_DIRECTORY=/home/remote-user/files

scp -r "$TR_TORRENT_DIR/$TR_TORRENT_NAME" "$USERNAME@$HOST:$TARGET_DIRECTORY"

Note: This relies on the target directory (/home/remote-user/files) already existing – if it doesn’t, make it.

Transmission configuration

Note: The daemon should be stopped before making changes, otherwise settings will be reverted to their previous state when it exits. (Alternatively, reload the daemon with SIGHUP after editing, as shown below.)

First things first, find out where the configuration file you’re going to change lives – see the Transmission wiki. For me it was in the /var/lib/transmission-daemon/.config folder. In terms of changes to be made, there’s another wiki page. settings.json is the file we need, and there are only two values we need to worry about.

$ sudo nano /var/lib/transmission-daemon/.config/transmission-daemon/settings.json

Change "script-torrent-done-enabled": false, to "script-torrent-done-enabled": true,

Change "script-torrent-done-filename": "" to "script-torrent-done-filename": "/usr/local/bin/after-torrent-downloaded.sh" or whatever the path is to your script.

Save settings.json and make Transmission respect your changes with:

$ sudo killall -HUP transmission-daemon

That’s all there is to it!

Try downloading a torrent, and when it’s completed take a look at the Transmission logs:

$ sudo journalctl -u transmission-daemon.service

Every time a torrent finishes, it should be copied to the configured remote server.

Email with Gmail, NodeJS, and OAuth2

If you look around for examples of how to send an email via Gmail with NodeJS, they generally end up mentioning you should flip the toggle to Allow less secure apps:

Screenshot of Gmail Less secure apps setting page

This doesn’t seem like a good idea – I mean it SAYS “less secure”. I looked around, and while Google has tons of documentation, I found it a bit overwhelming. The NodeJS quickstart, though, is a great place to start. It shows how to set up a client to authenticate with Google in the “more secure” fashion. I’ll go through that quickstart here, with a couple of tweaks to send email.

First things first, install the necessary dependencies:

yarn add google-auth-library googleapis js-base64

Then steal most of the quickstart.js, swapping out enough to send an email. Note that this is my first time ever interacting with the Gmail API, so while this worked to send an email for me, no guarantees…

Pull in all the dependencies:

const fs = require('fs');
const readline = require('readline');
const google = require('googleapis');
const googleAuth = require('google-auth-library');
const Base64 = require('js-base64').Base64;

Choose the appropriate Auth Scopes for what you’re trying to accomplish:

const SCOPES = ['https://mail.google.com/',
  'https://www.googleapis.com/auth/gmail.modify',
  'https://www.googleapis.com/auth/gmail.compose',
  'https://www.googleapis.com/auth/gmail.send'
];

Define where you’re going to store the auth token once you get it:

const TOKEN_DIR = (process.env.HOME || process.env.HOMEPATH ||
  process.env.USERPROFILE) + '/.credentials/';
const TOKEN_PATH = TOKEN_DIR + 'gmail-nodejs-quickstart.json';

First, we’ll want to read the client secret that was created in the manual set up phase.

/**
 * Read the contents of the client secret JSON file
 * 
 * @param {String} filename - name of the file containing the client secrets
 */
function readClientSecret(filename) {
  return new Promise((resolve, reject) => {
    fs.readFile(filename, (err, content) => {
      if (err) {
        return reject('Error loading client secret from ' + filename +
          ' due to ' + err);
      }
      return resolve(content);
    });
  });
}

Then, after parsing that JSON file, we’ll want to build Google’s OAuth2 client – they’re nice and provide one for us.

/**
 * Create an OAuth2 client with the given credentials
 *
 * @param {Object} credentials The authorization client credentials.
 */
function authorize(credentials) {
  let clientSecret = credentials.installed.client_secret;
  let clientId = credentials.installed.client_id;
  let redirectUrl = credentials.installed.redirect_uris[0];
  let auth = new googleAuth();
  let oauth2Client = new auth.OAuth2(clientId, clientSecret, redirectUrl);

  return new Promise((resolve, reject) => {
    // Try reading the existing token
    fs.readFile(TOKEN_PATH, function (err, token) {
      if (err) {
        // If there isn't an existing token, get a new one
        resolve(getNewToken(oauth2Client));
      } else {
        oauth2Client.credentials = JSON.parse(token);
        resolve(oauth2Client);
      }
    });
  });
}

If this is the first time executing the program, or you’ve deleted the cached token, you’ll need to get a new one.

/**
 * Get and store new token after prompting for user authorization, then return
 * authorized OAuth2 client.
 *
 * @param {google.auth.OAuth2} oauth2Client The OAuth2 client to get token for.
 */
function getNewToken(oauth2Client) {
  let authUrl = oauth2Client.generateAuthUrl({
    access_type: 'offline',
    scope: SCOPES
  });

  console.log('Authorize this app by visiting this url: ', authUrl);

  let readlineInterface = readline.createInterface({
    input: process.stdin,
    output: process.stdout
  });

  return new Promise((resolve, reject) => {
    readlineInterface.question('Enter the code from that page here: ',
      (code) => {
        readlineInterface.close();
        oauth2Client.getToken(code, (err, token) => {
          if (err) {
            return reject('Error while trying to retrieve access token: ' + err);
          }

          oauth2Client.credentials = token;
          storeToken(token);
          return resolve(oauth2Client);
        });
      });
  });
}

To avoid having to do this on every execution, it makes sense to write the token out to disk.

/**
 * Store token to disk to be used in later program executions.
 *
 * @param {Object} token The token to store to disk.
 */
function storeToken(token) {
  try {
    fs.mkdirSync(TOKEN_DIR);
  } catch (err) {
    if (err.code != 'EEXIST') {
      throw err;
    }
  }
  fs.writeFile(TOKEN_PATH, JSON.stringify(token), (err) => {
    if (err) throw err;
    console.log('Token stored to ' + TOKEN_PATH);
  });
}

At this point, our OAuth2 client is authenticated and ready to roll! If we’ve set up the Auth Scopes properly, our client should also be authorized to do whatever we want it to do. There are a handful of libraries that make this easier, but for simplicity’s sake we’ll just hand-roll an email string.

/**
 * Build an email as an RFC 5322 formatted, Base64 encoded string
 * 
 * @param {String} to email address of the receiver
 * @param {String} from email address of the sender
 * @param {String} subject email subject
 * @param {String} message body of the email message
 */
function createEmail(to, from, subject, message) {
  let email = ["Content-Type: text/plain; charset=\"UTF-8\"\n",
    "MIME-Version: 1.0\n",
    "Content-Transfer-Encoding: 7bit\n",
    "to: ", to, "\n",
    "from: ", from, "\n",
    "subject: ", subject, "\n\n",
    message
  ].join('');

  return Base64.encodeURI(email);
}

Then the actual magic: using our authenticated client and our formatted email to send the message. I’m not positive on this part, as I didn’t find a specific example that did it exactly as I was expecting (I also didn’t look too hard…)

/**
 * Send an email message.
 *
 * @param {String} email RFC 5322 formatted, Base64 encoded string.
 * @param {google.auth.OAuth2} oauth2Client The authorized OAuth2 client
 */
function sendMessage(email, oauth2Client) {
  const gmail = google.gmail('v1');
  gmail.users.messages.send({
    auth: oauth2Client,
    userId: 'me', // special value meaning the authenticated user
    resource: {
      raw: email
    }
  });
}

Then it’s just a matter of stringing everything together. The invocation part of the script:

let to = 'mmonroe@gmail.com';
let from = 'ckent@gmail.com';
let subject = 'Email subject generated with NodeJS';
let message = 'Big long email body that has lots of interesting content';

readClientSecret('client_secret.json')
  .then(clientSecretJson => {
    let clientSecret = JSON.parse(clientSecretJson);
    return authorize(clientSecret);
  }).then(oauth2client => {
    let email = createEmail(to, from, subject, message);
    sendMessage(email, oauth2client);
  }).catch(error => {
    console.error(error);
  });

And that’s all. Executing this the first time prompts for the value shown in the output URL then sends the email, executing it subsequent times just sends the email. Easy enough!

Not quite unit testing ExpressJS

Everything is awful

I tried. First things first, Node isn’t running in the browser. The vast majority of modern JavaScript testing tools open up a browser. Of course this makes sense, but what a pain! If I see one more article about some web developer somewhere writing what’s effectively a unit test but doing it with a browser, I’m going to explode. “But what about PhantomJS?” you might say. PhantomJS is the answer to an entirely different question (also the project is over now that Chrome and Firefox are going headless). Server side code that only ever runs with Node should not be tested by a browser. The amount of complication and overhead being introduced is insane.

What are the options?

Here are some of the options: Mocha, Jasmine, Karma, Tape, Sinon, Jest. As with everything else in the modern JavaScript ecosystem, there are a million and a half options, all considered anti-patterns by one person or another. There are many articles by enlightened people suggesting you use Tape instead of one of those bloated, new-agey frameworks. I was actually pretty convinced by one such article – it hit a lot of points that resonated with me. Then I used it. Oh, you want to use TypeScript? Maybe some promises or other asynchronous things? How about a test structure that lets you organize things in a meaningful way? Now that you’re a minimalist, you can do all that work yourself!

Everyone is awful

What slays me about articles like that are comments like these:

Think you’ll miss automagic test parallelization? I keep tests for different modules in different files. It takes about five minutes to write a little wrapper that will fire up workers across all your machine cores and zip through them in parallel.

and this…

Before/After/BeforeEach/AfterEach

You don’t need these. They’re bad for your test suite. Really. I have seen these abused to share global state far too often. Try this, instead:

*sigh*. People will always abuse stuff. Abandoning functionality because it can be misused should be carefully considered. Code reuse is a spectrum. There are people who pull in giant libraries just to use a 3-line function, and they get shamed for it (jQuery, anyone?). In my experience, the other end of that spectrum is just as bad. I’ve seen multiple hand-rolled implementations of HTML sanitizers. WHY?! It’s going to suck and be wrong and probably open you up to some code injection. Someone else has done it better.

What to do?

First, one of the things I realized was SUPER important about this effort is being mindful of what you’re trying to test. Too often I found myself most of the way through writing a test before discovering I was really just testing the router or some other provided piece of functionality. I tried a few strategies and it all seemed like a bit of a mess. There’s a blog post here with some information, but I don’t really buy it all.

The most effective strategy I found was really framework independent. Take a look through focusaurus’ repository here – and I mean really look at it. While I didn’t follow the suggestions there 100%, by incorporating some of the ideas, I found I was able to break most of my application into plain old JavaScript.
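
To make that concrete, here’s a hypothetical sketch (the names are mine, not from that repository): the logic lives in a plain module with zero Express imports, so it can be unit tested by simply calling it, and the route handler shrinks to translating between HTTP and that function.

// discount.js – plain old JavaScript; unit test this directly,
// no browser, no HTTP, no framework
function calculateDiscount(cart) {
  const total = cart.items.reduce((sum, item) => sum + item.price, 0);
  return total > 100 ? total * 0.1 : 0;
}

module.exports = { calculateDiscount };

// routes.js – the Express layer is now too thin to be worth "unit" testing
const express = require('express');
const { calculateDiscount } = require('./discount');

const router = express.Router();

// assumes a JSON body parser is mounted upstream
router.post('/discount', (req, res) => {
  res.json({ discount: calculateDiscount(req.body) });
});

module.exports = router;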

Which framework should I use?

  • Purely Node? Try tape
  • Angular? You’ll likely be using Mocha and Jasmine
  • React? Jest looks like it’s the way to go

Every testing framework I’ve ever used has been a bit of a pain. Tests end up extremely coupled to the framework itself. Once the application has been decomposed nicely, I honestly don’t think frameworks make much difference. Read the examples, pick the one you like the style of most, then deal with all the ways it kind of sucks.

nginx for Node application deployment

What is nginx

“NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.”

Why use it

For me, the biggest motivation is as an SSL termination point. With LetsEncrypt offering free certificates, SSL is a no-brainer, even for weekend projects. It sucks setting it up over and over and over again (even though it’s WAY easier than a couple years ago). I have one domain that I use as a staging area while I’m playing around with various ideas. Rather than set up SSL certificates for the various servers running on that machine, nginx is the SSL termination point which then proxies the requests.

Additionally, nginx is a super fast and simple static file server. If you’re just messing around with this week’s SPA framework, you don’t need an application server at all. This means most of your front end can be delivered as fast as possible, and fast is usually a good thing.
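
For a static-only site, the whole server block can be as small as this (a sketch – the hostname and path are made up):

server {
    listen 80;
    server_name spa.example.ca;

    root /home/toby/spa-build;
    index index.html;

    # Fall back to index.html so client-side routing keeps working
    location / {
        try_files $uri $uri/ /index.html;
    }
}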

How to use it

First and foremost, there’s a great DigitalOcean tutorial here about deploying NodeJS servers.

One of the common use cases I find myself in is the following:

  • I have a client that I want served (HTML, JavaScript, CSS etc)
  • I have a server that provides an API, usually exclusively intended for the client
  • I don’t really want the client and the server to be directly aware of one-another

With that in mind, for the hostname example.ca I would create /etc/nginx/sites-available/example.ca with something like the following content:
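
The gist that was originally embedded here hasn’t survived, so the following is a reconstruction based on the explanation below – treat it as a sketch rather than gospel:

server {
    listen 80;
    server_name example.ca;

    # Serve the client files (favicon, HTML, CSS, JavaScript, etc.)
    location / {
        root /home/toby/client;
        try_files $uri $uri/ =404;
    }

    # Strip the /api/ prefix, then proxy to the Express server on port 3000
    location /api/ {
        rewrite ^/api/(.*)$ /$1 break;
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Line 27 and on: the SSL bits (listen 443 ssl, ssl_certificate, redirects)
    # that EFF's Certbot manages, as mentioned below
}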

What does this mean?

For the most part, I have no idea. I imagine I copy/pasted it off a StackOverflow answer at some point. I’ll look it up when I have time.

nginx will be listening on port 80, and if someone comes knocking at the root path / it will try and serve up the client files that live in /home/toby/client (favicon, HTML, css, JavaScript etc).

If someone makes a request prefixed by /api/, the request is rewritten to strip off that prefix (so https://example.ca/api/endpoint/test would turn into http://127.0.0.1:3000/endpoint/test) and proxied off to an Express server that is running on port 3000 (in this example).

Line 27 and on are all EFF’s Certbot‘s doing.

Is it NGINX, nginx, or Nginx?

I think nginx is the original form [citation needed], Nginx Inc. is the name of the company that backs it, and NGINX is the logo and modern “marketing” name. From that perspective, the software is likely most correctly referred to as NGINX, but practically speaking my impression is most people use nginx. Probably could be clearer…

Authentication and Authorization with Express PostGraphQL Server

For weekend projects, I generally want to get up and running as quick as possible. One constant for almost all web applications is authentication, authorization, and me not really wanting to deal with either. It always seems like a pain! While trying to still stay in the realm of learning new things, I figured I’d give PostGraphQL a shot. Sticking with technologies I’ve previously used, I’ll use PostGraphQL as middleware with Express, and run PostgreSQL in Docker.

Note: This is essentially a reimplementation of the wonderful PostGraphQL tutorial here.
All code is available here.

Setting up Docker

First things first, I’ll need an actual database server. My go-to for this is Docker, as it’s easy to manage many instances, easy to scrap and start fresh, and easy to validate that my provisioning works as expected. With Docker installed, it’s as simple as:

docker run --restart=always -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=password -d postgres:alpine

Basic database configuration

Before getting into tables and functions, I’ll need a database instance on the server. Doing this in psql is my go-to.

 CREATE DATABASE auth;

Then connect to it

 \c auth

I’m going to be encrypting passwords, so I’ll add the pgcrypto extension. I’m also going to be dealing with email addresses, and for sanity’s sake I’m going to treat the entire address as case insensitive. I know that’s not technically accurate, but it’s a usability nightmare otherwise. To do so, I’ll enable the citext extension.

CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "citext";

Both PostGraphQL and PostgREST use schemas (or schemata, if you’re that kind of person) to scope entities. There’s some good reading about it here. The simplest setup is to have one public schema, which will turn into the API, and one private schema, which will be kept “secret”.

CREATE SCHEMA auth_public; 
CREATE SCHEMA auth_private;

Following PostgREST some more, there are 3 main roles (when using row-level security): unauthenticated/anonymous, authenticated, and the role used by the actual framework itself (I’ve called it auth_postgraphql). The framework’s role should be able to access everything from both other roles.

CREATE ROLE auth_postgraphql LOGIN PASSWORD 'password'; 

CREATE ROLE auth_anonymous; 
GRANT auth_anonymous TO auth_postgraphql; 

CREATE ROLE auth_authenticated; 
GRANT auth_authenticated TO auth_postgraphql;

Schema design

Tables

Now the actual schema. For this seed project, I’m going to keep it about as minimal as possible while still allowing for authorization.

(Schema diagram: a public auth_public.user table, and a private auth_private.user_account table referencing it.)

Users have first names, last names, and unique IDs; privately, they also have an email address (this is their username) and a password.

Creating these two tables in their respective schemas:

CREATE TABLE auth_public.user ( 
  id              serial primary key, 
  first_name      text not null check (char_length(first_name) < 80), 
  last_name       text check (char_length(last_name) < 80), 
  created_at      timestamp default now() 
);
CREATE TABLE auth_private.user_account ( 
  user_id         integer primary key references auth_public.user(id) on delete cascade, 
  email           citext not null unique, 
  password_hash   text not null 
);

Authorization

PostGraphQL makes authorization pretty straightforward by delegating it to the database. PostgreSQL has Row-Level Security (as of 9.5), which means a naive implementation of authorization is to restrict users by only letting them modify their own rows (where their id matches the id of the row).

Enable RLS on the user table:

ALTER TABLE auth_public.user ENABLE ROW LEVEL SECURITY;

And set policies so users can interact with their own rows. Everyone (unauthenticated included) will be able to query the table, but only authenticated users will be able to update or delete entries, and only their own.

CREATE POLICY select_user ON auth_public.user FOR SELECT
  using(true);

CREATE POLICY update_user ON auth_public.user FOR UPDATE TO auth_authenticated 
  using (id = current_setting('jwt.claims.user_id')::integer); 

CREATE POLICY delete_user ON auth_public.user FOR DELETE TO auth_authenticated 
  using (id = current_setting('jwt.claims.user_id')::integer);

JWT for authentication

Before going any further, I have enough information to be able to create the type I’ll be using for my JWT. Keeping this simple, it will have role for authentication and user_id for authorization.

CREATE TYPE auth_public.jwt as ( 
  role    text, 
  user_id integer 
);

Functions

I’ll create 3 functions:

  1. register a new user
  2. authenticate that user with a provided email and password
  3. show who the current user is

CREATE FUNCTION auth_public.register_user( 
  first_name  text, 
  last_name   text, 
  email       text, 
  password    text 
) RETURNS auth_public.user AS $$ 
DECLARE 
  new_user auth_public.user; 
BEGIN 
  INSERT INTO auth_public.user (first_name, last_name) values 
    (first_name, last_name) 
    returning * INTO new_user; 
    
  INSERT INTO auth_private.user_account (user_id, email, password_hash) values 
    (new_user.id, email, crypt(password, gen_salt('bf'))); 
    
  return new_user; 
END; 
$$ language plpgsql strict security definer;

CREATE FUNCTION auth_public.authenticate ( 
  email text, 
  password text 
) returns auth_public.jwt as $$ 
DECLARE 
  account auth_private.user_account; 
BEGIN 
  SELECT a.* INTO account 
  FROM auth_private.user_account as a 
  WHERE a.email = $1; 

  if account.password_hash = crypt(password, account.password_hash) then 
    return ('auth_authenticated', account.user_id)::auth_public.jwt; 
  else 
    return null; 
  end if; 
END; 
$$ language plpgsql strict security definer;

CREATE FUNCTION auth_public.current_user() RETURNS auth_public.user AS $$
SELECT *
FROM auth_public.user
WHERE id = current_setting('jwt.claims.user_id')::integer
$$ language sql stable;

Permissions

Everything I need for this seed project is defined now, so time to sort out the permissions for the various roles.

GRANT USAGE ON SCHEMA auth_public TO auth_anonymous, auth_authenticated; 
GRANT SELECT ON TABLE auth_public.user TO auth_anonymous, auth_authenticated; 
GRANT UPDATE, DELETE ON TABLE auth_public.user TO auth_authenticated; 
GRANT EXECUTE ON FUNCTION auth_public.authenticate(text, text) TO auth_anonymous, auth_authenticated; 
GRANT EXECUTE ON FUNCTION auth_public.register_user(text, text, text, text) TO auth_anonymous; 
GRANT EXECUTE ON FUNCTION auth_public.current_user() TO auth_anonymous, auth_authenticated;

Set up server

Create a regular ol’ Express server with dotenv to ensure our environment specific details don’t leak out.

$ yarn init
yarn init v0.24.6
question name (auth-server):
question version (1.0.0):
question description:
question entry point (index.js):
question repository url:
question author:
question license (MIT):
success Saved package.json
Done in 2.82s.
$ yarn add express dotenv
yarn add v0.24.6
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
success Saved 43 new dependencies.
... (snip all the dependencies) ...
$ touch index.js .env

Now set up a barebones Express server, e.g.:

require('dotenv').config();
const express = require('express');

const app = express();

app.use(function (req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

app.use(function (err, req, res, next) {
  res.status(err.status || 500);
  // Only leak error details in development
  res.send('Error! ' + err.message +
    (req.app.get('env') === 'development' ? ' ' + err.stack : ''));
});

app.listen(process.env.PORT);

Make sure you’ve added PORT to your .env file and set it appropriately – e.g. PORT=3000

Integrate PostGraphQL middleware

This part is arguably slightly more interesting, but still just using configuration to wire things together:
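
The wiring itself was an embedded gist; here’s a minimal sketch of what it looked like, using the postgraphql package of the day. The option names follow its README, the connection string matches the role created earlier, and JWT_SECRET is assumed to live in .env. Mount this before the 404 handler above.

const { postgraphql } = require('postgraphql');

app.use(postgraphql(
  'postgres://auth_postgraphql:password@localhost:5432/auth',
  'auth_public', // only entities in this schema are exposed in the API
  {
    graphiql: true,                         // serve the GraphiQL IDE
    jwtSecret: process.env.JWT_SECRET,      // used to sign/verify JWTs
    jwtPgTypeIdentifier: 'auth_public.jwt', // the composite type from earlier
    pgDefaultRole: 'auth_anonymous'         // role for unauthenticated requests
  }
));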

Interactive testing

Database level

Before getting any application-level concerns involved, I like to test that everything works at the database level. I’ll register a user (using the function), make sure it populates both the public and private tables, intentionally fail to authenticate against it, successfully authenticate against it, then finally clean up the user.

auth=# SELECT auth_public.register_user ('firstname', 'lastname', 'email', 'password');
                    register_user                    
-----------------------------------------------------
 (1,firstname,lastname,"2017-06-11 04:05:39.216743")
(1 row)
auth=# SELECT *
FROM auth_public.user
JOIN auth_private.user_account
  ON auth_public.user.id = auth_private.user_account.user_id
;
 id | first_name | last_name |         created_at         | user_id | email |                        password_hash                         
----+------------+-----------+----------------------------+---------+-------+--------------------------------------------------------------
  1 | firstname  | lastname  | 2017-06-11 04:05:39.216743 |       1 | email | $2a$06$PZ9NUmYpgDjk8QJuDwah.OJSt/Quo53Qzkddc5ccOSYpuzYXdfYJO
(1 row)
auth=# SELECT auth_public.authenticate('email', 'wrong-password');
 authenticate 
--------------
 
(1 row)
auth=# SELECT auth_public.authenticate('email', 'password');
 authenticate 
--------------
 (auth_authenticated,1)
(1 row)
auth=# DELETE FROM auth_public.user;
DELETE 1

GraphiQL level

  1. Navigate to GraphiQL on the port you’ve configured (3000 by default)
    – e.g. http://localhost:3000/graphiql

Create a user

  1. Register a user via GraphQL mutation
mutation {
  registerUser(input: {
    firstName: "Genghis"
    lastName: "Khan"
    email: "Genghis@khan.mn"
    password: "Genghis1162"
  }) {
    user {
      id
      firstName
      lastName
      createdAt
    }
  }
}
  2. Observe the response
{
  "data": {
    "registerUser": {
      "user": {
        "id": 2,
        "firstName": "Genghis",
        "lastName": "Khan",
        "createdAt": "2017-06-11T06:17:39.084578"
      }
    }
  }
}

Observe authentication working

  1. Try authenticating with a GraphQL mutation
mutation {
  authenticate(input: {
    email: "Genghis@khan.mn"
    password: "Genghis1162"
  }) {
    jwt 
  }
}
  2. Observe the response
{
  "data": {
    "authenticate": {
      "jwt": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aF9hdXRoZW50aWNhdGVkIiwidXNlcl9pZCI6MiwiaWF0IjoxNDk3MTYyMTIyLCJleHAiOjE0OTcyNDg1MjIsImF1ZCI6InBvc3RncmFwaHFsIiwiaXNzIjoicG9zdGdyYXBocWwifQ.hLZ7p3vJs3UYW9IKB7u8tbXONUl_tZoWhiAAD1-OPQg"
    }
  }
}

Try making an unauthenticated request when authentication is necessary

  1. currentUser is protected, so query that
query {
  currentUser{
    id
    firstName
    lastName
    createdAt
  }
}
  2. Observe the not-particularly-friendly response
{
  "errors": [
    {
      "message": "unrecognized configuration parameter \"jwt.claims.user_id\"",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "currentUser"
      ]
    }
  ],
  "data": {
    "currentUser": null
  }
}

Try making an authenticated request when authentication is necessary

  1. You’ll need the ability to send your JWT to the server, which unfortunately isn’t possible with vanilla GraphiQL
  2. Set an Authorization header by copy/pasting the value out of the `jwt` field from the `authenticate` response above.
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aF9hdXRoZW50aWNhdGVkIiwidXNlcl9pZCI6MSwiaWF0IjoxNDk3MTYwNzA3LCJleHAiOjE0OTcyNDcxMDcsImF1ZCI6InBvc3RncmFwaHFsIiwiaXNzIjoicG9zdGdyYXBocWwifQ.aInZvEVhhDfi9yQDWRzvmSaE7Mk2PufbBrY3rxGlEt8
  • Don’t forget the Bearer on the right side of the header, otherwise you’ll likely see Authorization header is not of the correct bearer scheme format.
  3. Submit the query with the Authorization header attached
query {
  currentUser{
    nodeId
    id
    firstName
    lastName
    createdAt
  }
}
  4. Observe your now successful response
{
  "data": {
    "currentUser": {
      "nodeId": "WyJ1c2VycyIsMl0=",
      "id": 2,
      "firstName": "Genghis",
      "lastName": "Khan",
      "createdAt": "2017-06-11T06:17:39.084578"
    }
  }
}

Observe authorization working

  1. With the authorization header set, try updating Genghis
mutation {
  updateUser(input: {
    nodeId: "WyJ1c2VycyIsMl0="
    userPatch: {
      lastName: "NotKhan"
    }
  }) {
    user {
      nodeId
      id
      firstName
      lastName
      createdAt
    }
  }
}
  2. Observe that it works:
{
  "data": {
    "updateUser": {
      "user": {
        "nodeId": "WyJ1c2VycyIsMl0=",
        "id": 2,
        "firstName": "Ghengis",
        "lastName": "NotKhan",
        "createdAt": "2017-06-11T06:17:39.084578"
      }
    }
  }
}
  3. Add a friend
mutation {
  registerUser(input: {
    firstName: "Serena"
    lastName: "Williams"
    email: "Serena@Williams.ca"
    password: "NotGhengis"
  }) {
    user {
      nodeId
      id
      firstName
      lastName
      createdAt
    }
  }
}
  4. Keeping Genghis’ JWT, try modifying your friend
  • Note this is Serena’s nodeId
mutation {
  updateUser(input: {
    nodeId: "WyJ1c2VycyIsM10="
    userPatch: {
      lastName: "KhanMaybe?"
    }
  }) {
    user {
      nodeId
      id
      firstName
      lastName
      createdAt
    }
  }
}
  5. Get rejected
{
  "errors": [
    {
      "message": "No values were updated in collection 'users' using key 'id' because no values were found.",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "updateUser"
      ]
    }
  ],
  "data": {
    "updateUser": null
  }
}

Conclusion

That’s it – the bulk of an application with authentication and authorization already sorted. It’s missing a front end, but at least there’s GraphiQL?

Code is available here.

Delete non-empty folder over FTP with JavaScript

Xplornet, my ISP, offers 50 MB of free hosting. More interestingly, it’s a public hostname that I don’t have to worry about setting up or maintaining! While mucking around with JavaScript front end frameworks, I figured that’d be a neat place to drop my work. Right off the bat I was a little bit wary of the offering, as 50 MB is a weirdly tiny amount of space, which likely means Xplornet has no idea what they’re doing when it comes to web hosting. Upon signing up and receiving my concerningly short password in plaintext, which I also don’t seem to be able to change, I figured I was bang on with my suspicions. Nothing private going on that server, that’s for sure. Regardless, I figured I’d give it a try and see what I could do with it.

It looks like the only way to get files to the server is via FTP (of course no SFTP). Xplornet gave me an account with access to a less than ideal front end for loading up files, but that is all GUI driven and generally painful. I want to deploy my front end automatically, so time to figure out a little FTP.

FTP in JavaScript

I’m presently trying to develop my JavaScript skills, so I figured I’d start writing up a deployment script in JS. So far as I can tell, the FTP ecosystem is pretty sparse in the JavaScript world. I landed on jsftp as the library backing the interaction, but it’s a bit rough around the edges. I figured I’d keep the script simple: delete everything in /public and replace it with the output of my build. I immediately tripped over FTP… I don’t really know anything about it beyond using clients to move files around every now and then, but it appears as though deleting a non-empty directory is difficult or maybe impossible?

For lack of rm -rf

When I can’t delete non-empty directories, the only other solution I can think of is to walk the directory tree and delete everything from the bottom up. So that’s what I decided to do! Note that this is literally the first iteration that managed to delete all the files in one shot, so hold off on the judgement… If I have spare time I might bundle it up into an NPM module; we’ll see how it goes.
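
The script itself was an embedded gist; here’s a sketch of the same bottom-up approach (the /public path is from this post, the host and credentials are placeholders, and it leans on jsftp’s ls listing – where type 1 means directory – plus raw FTP commands):

const JSFtp = require('jsftp');

const ftp = new JSFtp({
  host: 'ftp.example.com', // placeholder
  user: process.env.FTP_USER,
  pass: process.env.FTP_PASS
});

// Promisified wrappers around jsftp's callback API
const ls = path => new Promise((resolve, reject) =>
  ftp.ls(path, (err, files) => err ? reject(err) : resolve(files)));
const raw = command => new Promise((resolve, reject) =>
  ftp.raw(command, err => err ? reject(err) : resolve()));

// Depth-first walk: empty each directory before removing the directory itself
async function deleteRecursively(path) {
  for (const file of await ls(path)) {
    const fullPath = path + '/' + file.name;
    if (file.type === 1) {
      await deleteRecursively(fullPath);
      await raw('RMD ' + fullPath);
    } else {
      await raw('DELE ' + fullPath);
    }
  }
}

deleteRecursively('/public')
  .then(() => console.log('Done!'))
  .catch(err => console.error(err))
  .then(() => raw('QUIT'));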

And that managed to do it. The next piece will be dumping the build output on the server, and then I should be good to go to actually use those 50 MB…

NordVPN with OpenVPN on Raspberry Pi

Why bother?

Everyone wants your data all the time. Personal privacy is being eroded as users are tracked, traffic is shaped, and an astonishing amount of “metadata” is collected and correlated. In the midst of all the scary privacy news of the past few years, I figured it was becoming indefensible to be without a VPN. The price for most of the products in the market is extremely reasonable, and without even worrying about nation states, it keeps a significant portion of my browsing information out of the hands of my ISP. Whether the ISP is looking at traffic for traffic-shaping concerns, selling “anonymized” data, or policing content infringement, I can’t imagine a single upside to exposing my data. With that said, I signed up for NordVPN (referral link). It was well reviewed, and a reasonable price – $3/month on a 2-year subscription.

Raspbian

The Raspberry Pi runs Raspbian, a version of Debian (which is also what Ubuntu is based on). I find this extremely handy, because it means there’s a wealth of information available. Unfortunately, I was unable to find precisely the guide I was looking for, hence this post. Debian (and therefore Raspbian) uses systemd to manage its services, which is ultimately where this is headed.

Set up

There are a couple pretty straightforward pieces here:

  1. Install OpenVPN
  2. Set up NordVPN
  3. Set up authentication with NordVPN
  4. Make it work
  5. Try it out

0. What’s your IP address right now?

How are we going to know if this worked? We’ll want to validate that our public IP address has changed. Note that this is different from your private LAN IP, which usually looks something like 192.168.1.23. I think one of the easiest ways to check the computer’s current public IP is something like this (obviously executed on the Pi itself):

$ curl ipinfo.io/ip
37.48.80.202

Write this down somewhere, and we’ll compare later.

1. Install OpenVPN

This one is super easy:

$ sudo apt install openvpn

2. Set up NordVPN

Almost as easy. You can look at NordVPN’s instructions here, but that really pollutes your /etc/openvpn folder, which I’ve found to be an annoyance. I made a folder to store the configs.

$ cd /etc/openvpn
$ sudo mkdir nordvpn
$ cd nordvpn
$ sudo wget https://nordvpn.com/api/files/zip
--2017-05-25 03:37:32--  https://nordvpn.com/api/files/zip
Resolving nordvpn.com (nordvpn.com)... 104.20.17.34, 104.20.16.34
Connecting to nordvpn.com (nordvpn.com)|104.20.17.34|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4113709 (3.9M) [application/octet-stream]
Saving to: ‘/etc/openvpn/nordvpn/zip’

/etc/openvpn/nordvpn/zip 100%[=======================================>] 3.92M 53.5KB/s in 56s

2017-05-25 03:38:31 (71.8 KB/s) - ‘/etc/openvpn/nordvpn/zip’ saved [4113709/4113709]

$ sudo unzip -q zip

At this point your /etc/openvpn/nordvpn folder should be chock full of (~2048?) .ovpn files for the various NordVPN servers. Time to choose one! Which one is totally dependent on your goals – latency, speed, privacy, security, etc. Picking one arbitrarily, copy it over:

$ cd /etc/openvpn
$ sudo cp nordvpn/sk2.nordvpn.com.tcp443.ovpn .
$ ls
sk2.nordvpn.com.tcp443.ovpn nordvpn update-resolv-conf

As a checkpoint, to make sure everything is working so far, you can start the VPN client up (you’ll need your NordVPN credentials here). Try running:

$ sudo openvpn sk2.nordvpn.com.tcp443.ovpn
Thu May 25 03:55:37 2017 OpenVPN 2.3.4 arm-unknown-linux-gnueabihf [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Jan 23 2016
Thu May 25 03:55:37 2017 library versions: OpenSSL 1.0.1t 3 May 2016, LZO 2.08
Enter Auth Username: *********************
Enter Auth Password: ********************
Thu May 25 03:56:23 2017 WARNING: --ping should normally be used with --ping-restart or --ping-exit
Thu May 25 03:56:23 2017 NOTE: --fast-io is disabled since we are not using UDP
... <a bunch of logging messages> ...
Thu May 25 03:56:30 2017 Initialization Sequence Completed

It should be self-explanatory, but if you see:

Thu May 25 03:57:38 2017 AUTH: Received control message: AUTH_FAILED
Thu May 25 03:57:38 2017 SIGTERM[soft,auth-failure] received, process exiting

You’ve presumably made a mistake with your credentials, or your account isn’t active.

3. Set up your NordVPN authentication

Obviously it sucks a little to have to type in your username and password every time you want to start your VPN connection. If the server is private, it’s nice to bake the authentication credentials right in. Disclaimer: there’s probably something objectionable about this; feel free to comment if there’s a better way. You can use your favorite editor here, so long as it ends up the same:

$ sudo nano .secrets

(Screenshot of the .secrets file: your NordVPN username on the first line, your password on the second.)

This is the format – username followed by a newline followed by password. If you haven’t used nano before, hit Ctrl + x to exit, then y to confirm you want to keep your changes, then finally Enter to actually exit.

Now open up your configuration file: sudo nano sk2.nordvpn.com.tcp443.ovpn

And find the line that says auth-user-pass.  Append the absolute path of the .secrets file you just created to this line. It’ll end up looking something like: auth-user-pass /etc/openvpn/.secrets

Then save and exit. This makes it so OpenVPN automatically looks in .secrets when it goes to authenticate with the NordVPN server.

4. Make it work: .ovpn != .conf

This one is extremely subtle if you’re not really sure what you’re doing – which is likely if you’re reading this. OpenVPN automatically sets up a daemon for every .conf file it finds in /etc/openvpn – note that I said .conf. We have .ovpn files. The last step here is to “convert” the file. All that means in this context is renaming it…

$ sudo mv sk2.nordvpn.com.tcp443.ovpn sk2.nordvpn.com.tcp443.conf

And you should be good to go!

5. Try it out

Hopefully everything has come together now. I think the most convincing way to try this out is a good ol’ sudo reboot, waiting for the unit to come back up, followed by $ curl ipinfo.io/ip – you should now get a different IP address from what you had in step 0.

$ curl ipinfo.io/ip
209.171.60.102