JWT - What You Should Know

Published Oct 29, 2019

This is not an introduction to JSON Web Tokens. There are plenty of those on the internet already. I’m trying to outline and compile my thoughts and research about when to implement JWT and how to do it safely.

TLDR: It's easy to shoot yourself in the foot with JWT, but it's very widely used and has some clear benefits. Be sure to do your research before diving in, and use a solid library.

If you want a good intro, check out this post and this post.

First of all, why are JWTs (JSON Web Tokens) useful and so widely used?

Is there any reason NOT to use JWT?

JWTs are interesting in that they are widely used and also widely criticized. Most criticisms of JWT fall into 2 categories: library/implementation vulnerabilities, and the downsides of stateless authentication.

I’m going to try to give my best shot at addressing some of these issues.


First you should decide if you need/want the benefits of JWT.

Because JWT has so many moving parts (fewer than some forms of authentication, but more than others), in my opinion you should only do it if you need some of the unique benefits of JWT.

I would say that if the following 2 things describe you, then JWT might be worth implementing:

If you are willing to take the time to understand JWT, then there are some resources to get you started.

Library/Implementation Vulnerabilities

It seems like this is an easy one to address. Just use a good library and good implementation. lol.


But seriously, JWT as a concept is strongly supported by the company Auth0, which curates a list of approved JWT libraries for many different languages, along with approved hashing algorithms.

That list of approved libraries can be found here.

The other important part of this concern is keeping the library up to date. However, this is an issue with all open-source projects.

If security is a primary concern, then you should already have a strategy to keep notified of the vulnerabilities in the libraries you are using.

If you work with Node.js, then the npm audit command should be an important part of your deployment workflow. Of course it's limited to known and reported library vulnerabilities, but it's much better than doing nothing.

More info on npm audit and npm security advisories.

If you're working in a language other than Node.js, then you'll need your own strategy for staying apprised of library vulnerabilities.


Good implementation comes from understanding. I think it's safe to say that blindly copy/pasting solutions from Stack Overflow really increases your likelihood of having a poor implementation.

This is authentication we’re talking about. And given how catastrophic the consequences can be, make sure you educate yourself and tread carefully when building your implementation.

Avoid blind copy/pasta unless you understand what it's doing. Personally, I also like to avoid using big black-box libraries and instead build the minimum functional system.

Here are some resources to help:

Don’t skip the reading, but some of the main things to do are:

Enforce Approved Algorithms

Have a short list of allowed (approved) algorithms, and ensure that your token verify function checks that the algorithm shown in the header is one of the approved algorithms.

Do some research to select the approved algorithms, but the most common seem to be:

Handle Asymmetric and Symmetric Algorithm Tokens Separately

Read more about this in the best practices article above, but it helps mitigate an attack where the public key of an asymmetric key pair (which is often easy for an attacker to get their hands on) is used as the secret key to forge a symmetrically signed token.

Strong Keys

Make sure your secret key is long enough. The rule of thumb is to make it at least as long as the hash output. So for a SHA-256-based algorithm like HS256, the secret key should be at least 256 bits (32 bytes) long.

Validate Nested Tokens

If using nested tokens, for example to hide sensitive information in the token body, be sure to validate all the way down.

Limit Token Utility

Avoid 1 token fits all if possible. When issuing a token make it clear in the claims what it should be used for and when validating the token make sure the claims match the requested use.

Validate Token Content

Be sure to validate the content of the token as well as the validity of the token. Use claims like aud, typ, iss, and sub and validate each individual claim after token validation and decoding to ensure that the right token is being used for the right thing.
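A minimal sketch of those post-verification checks (the function and the expected values are illustrative; in practice they would come from your app's configuration):

```javascript
// Hypothetical claim checks run AFTER signature verification succeeds.
function checkClaims(payload, expected) {
  if (payload.iss !== expected.iss) throw new Error('Unexpected issuer');
  if (payload.aud !== expected.aud) throw new Error('Unexpected audience');
  if (payload.sub !== expected.sub) throw new Error('Unexpected subject');
  return payload; // every claim matched the requested use
}
```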

Stateless Auth Downsides

Here is where some personal preference comes in. One of the biggest downsides to stateless authentication is that there is no way of invalidating the tokens once they are issued. This is a potential security issue as well as an inconvenience.

But I think this can be solved by using a stateful refresh token.

This does add more moving parts to the authentication machine, but it solves the important issue of revoking token access.
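In broad strokes, the revocation side of such a flow can be sketched like this. All names are illustrative, and the in-memory Set stands in for a database table or Redis in a real deployment:

```javascript
// Server-side state for the stateful refresh token.
const revokedRefreshTokens = new Set();

function refreshAccessToken(refreshToken, issueJwt) {
  // The refresh token is checked against server-side state on every use,
  // so revoking it immediately stops new access tokens from being minted.
  if (revokedRefreshTokens.has(refreshToken)) {
    throw new Error('Refresh token revoked');
  }
  return issueJwt(); // mint a new short-lived stateless JWT
}

function revokeRefreshToken(refreshToken) {
  revokedRefreshTokens.add(refreshToken);
}
```

The access tokens themselves stay stateless and short-lived; only the refresh path pays the cost of a state lookup.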

There are many ways to implement this. For me, at this time, I prefer this:

Alternatives to JWT

Create a New Linux User With SSH Access

Published Oct 11, 2019

Generate the key for the user. This should be done on the local machine:

ssh-keygen -t rsa -b 4096 -o -a 100

Create a new user on the remote machine. The -m flag adds a default home directory for the user.

useradd -m new-user

Switch to the new user:

sudo su new-user

Make a .ssh directory for the user:

mkdir ~/.ssh

Make an authorized_keys file:

touch ~/.ssh/authorized_keys

Set the right permissions:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

Add the public key you created to the authorized_keys file.


Quick Docker Server Setup with API on Ubuntu

Published Oct 5, 2019

This is mostly just a quick reference for me to use when I need to whip up a docker server. This is on Ubuntu 16.04.

Install Docker

First prepare the app registry:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update

Then check the output of this command. It should show that docker is not installed, but that there is an install candidate:

apt-cache policy docker-ce

# docker-ce:
#   Installed: (none)
#   Candidate: 5:18.09.0~3-0~ubuntu-xenial
#   Version table:
#      5:18.09.0~3-0~ubuntu-xenial 500
#   ...

Then install docker:

sudo apt-get install -y docker-ce

Then check the output of this command. Docker should be loaded and active:

sudo systemctl status docker

# ● docker.service - Docker Application Container Engine
#    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
#    Active: active (running) since Fri 2019-01-04 22:45:48 UTC; 1min 22s ago
#      Docs: https://docs.docker.com
#  Main PID: 3538 (dockerd)
#    CGroup: /system.slice/docker.service
#            └─3538 /usr/bin/dockerd -H unix://
# .... Logs down here should say something like Started Docker Application Container Engine. at some point

Add your current user to the docker user group so it can access the docker socket:

sudo usermod -a -G docker $USER

Configure Docker API

Here is where we tell docker to listen to incoming API requests.


Before doing this step you should make sure you have a proper firewall implemented. There is no out of the box authentication for the docker API, and many hackers know docker. Once your docker instance is listening for requests, unless your docker port is protected by a firewall or something else, hackers WILL start running random containers on your docker server.

Now make a new file called /etc/systemd/system/docker.service.d/docker.conf and open it for editing:

sudo mkdir /etc/systemd/system/docker.service.d
sudo touch /etc/systemd/system/docker.service.d/docker.conf
sudo vim /etc/systemd/system/docker.service.d/docker.conf

Add this to the file (the [Service] header and the empty ExecStart= line are needed so systemd clears the original start command before applying the override):

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp:// -H unix:///var/run/docker.sock

You can change the port the API listens on here if you want to.

Restart docker service:

sudo systemctl daemon-reload
sudo systemctl restart docker.service

Check the output of this command. It should have -H tcp:// in the dockerd command.

sudo systemctl status docker.service

# ● docker.service - Docker Application Container Engine
#    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
#   Drop-In: /etc/systemd/system/docker.service.d
#            └─docker.conf
#    Active: active (running) since Fri 2019-01-04 22:54:49 UTC; 3s ago
#      Docs: https://docs.docker.com
#  Main PID: 3791 (dockerd)
#     Tasks: 8
#    Memory: 30.0M
#       CPU: 178ms
#    CGroup: /system.slice/docker.service
#            └─3791 /usr/bin/dockerd -H tcp:// -H unix:///var/run/docker.sock
# .... Logs down here should say something like Started Docker Application Container Engine. at some point          

Now we can do a quick test. This command should return an empty array since we don’t have any images on our fresh docker host:

curl -X GET http://localhost:2375/images/json
# []

You can also test it from another computer with this command:

DOCKER_HOST=tcp://<DOCKER_HOST_IP_GOES_HERE>:2375 docker ps -a

Now we should be done.

Simple Object Storage in Redis (Node.js)

Published Oct 5, 2018

Redis is a simple key-value store and is highly optimized for fast reads and writes. I found myself in a situation where I wanted to offload some app task logging from our document store (mongoDB) to redis.

There are a few important things to consider when making this kind of change.
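One of those considerations is that Redis string values hold flat strings, so storing an object means serializing it (JSON is the usual choice). A minimal sketch; the key scheme and the commented client calls are my own illustration and assume the `redis` npm package:

```javascript
// Serialize an object for storage as a Redis string value.
function serialize(obj) {
  return JSON.stringify(obj);
}

// Parse a stored value back into an object.
function deserialize(str) {
  // Redis GET returns null for a missing key; pass that through.
  return str === null ? null : JSON.parse(str);
}

// Usage with a redis client (assumption):
//   await client.set(`log:${id}`, serialize(entry));
//   const entry = deserialize(await client.get(`log:${id}`));
```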

Multi Tenancy with Express/Mongoose

Published Oct 3, 2018

Multi tenant apps are apps where multiple users share the same database but their data is isolated from one another. This can describe almost any app with multiple users. For example, users can often only see and change their own data.

However, personally, I define multi tenant apps as having a layer of data isolation above the level of the user. For example, you could have a data model called an organization, and the user can only see and interact with the data related to that organization.
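In practice that isolation layer often reduces to scoping every query by an organization id. A sketch; the middleware, field names, and model are all hypothetical:

```javascript
// Hypothetical Express-style middleware that attaches a tenant filter to
// the request; `req.user.organizationId` is an assumed shape.
function tenantScope(req, res, next) {
  req.tenantFilter = { organization: req.user.organizationId };
  next();
}

// Later, in a route handler (Mongoose-style, names assumed):
//   const projects = await Project.find({ ...req.tenantFilter, archived: false });
```

Centralizing the filter in one middleware means no individual route can forget the tenant boundary.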

Serverless Framework S3 Permissions (Serverless IAM Permissions)

Published Sep 13, 2018

This really isn’t a complicated problem, but I want to document this for later.

It's hard to find a good title for this. Usually you will never use a lambda function to upload to S3. For user-submitted files, the right way to upload to S3 is to generate a temporary signed upload URL; the user then submits directly to S3 without sending the file through the serverless function.
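That flow can be sketched like this. The bucket name and expiry are example values, and `s3` is assumed to be an aws-sdk S3 client, whose `getSignedUrl` method the sketch relies on:

```javascript
// Issue a temporary signed upload URL for a client-side upload.
function getUploadUrl(s3, key) {
  return s3.getSignedUrl('putObject', {
    Bucket: 'my-example-bucket', // assumption: your bucket name
    Key: key,
    Expires: 300, // the URL stays valid for 5 minutes
  });
}

// The client then PUTs the file straight to the returned URL, so the
// serverless function never touches the file body.
```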

Executable Binary Files with Serverless Framework and Webpack - AWS Lambda

Published Sep 1, 2018

Many web apps rely on executable binaries to function. For example, if you want to do any kind of image processing, usually, in addition to the actual libraries you are using, you need a program like `imagemagick` to make it actually work.

So, if you ever want to build a sophisticated web app with the serverless framework, you need to be able to upload and use executable binaries. And it's best if you can upload them in such a way that the library knows how to find them without any extra configuration.

Robust Serverless API Boilerplate with ES6, Folder Structure, Testing (Mocha + Chai), and ESLint

Published Aug 25, 2018

As a Rails developer turned Javascript Zealot, I sometimes miss the structure and opinions of the Ruby on Rails world. It's amazing how bare-bones many javascript libraries are. They are so modular and self-contained (good things) that, unless you take the time to add some structure and organization to your code, it's easy for your project to feel chaotic and unorganized. So I'm always looking for good patterns and structure to follow in my javascript projects.

The serverless framework is a good example of this. It's so minimal in its setup that it may be difficult to know where to start to give it some structure. So here I'll share with you one possible way to structure a serverless API project.

Access Token Handling (Automatic Refresh) with React + Redux

Published Aug 23, 2018

The industry trend of decoupling backends and frontends has lots of advantages. You could argue that it's just good software design. Plus, it makes it much easier to have multiple front-end clients using the same backend. And since mobile apps don't use cookies, it makes sense to convert the entire authentication system to some kind of token-based solution.

But the next question is: how can you safely and conveniently store and manage these tokens in your React+Redux app?

Node Env Variables - dotenv Workaround

Published Aug 22, 2018

The library that everyone uses to manage environment variables in node is dotenv. I don't think I've ever had so much trouble with such a popular module.

What I want is to have my development environment run with one set of environment variables and my tests run with a different set.
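One workaround can be sketched like this; the helper and the file names are my own convention, not something dotenv prescribes:

```javascript
// Pick which env file to load based on NODE_ENV.
function envFileFor(nodeEnv) {
  return nodeEnv === 'test' ? '.env.test' : '.env';
}

// Usage (assumes the dotenv npm package):
//   require('dotenv').config({ path: envFileFor(process.env.NODE_ENV) });
```

Keeping the selection in a tiny pure function makes the dev/test split explicit and easy to test on its own.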