This is not an introduction to JSON Web Tokens. There are plenty of those on the internet already. I’m trying to outline and compile my thoughts and research about when to implement JWT and how to do it safely.
TLDR: It's easy to shoot yourself in the foot with JWT, but it's very widely used and has some clear benefits. Be sure to do your research before trying it, and use a solid library.
First of all, why are JSON Web Tokens (JWT) useful and so widely used?
- Stateless - They don’t require storing sessions in a database.
- Portable - They can easily be used with multiple different backends and services.
- No cookies - They can easily be used as bearer tokens by multiple different kinds of clients (mobile, browser, etc…)
- Self-contained - They include useful information in the token itself, like user roles and other claims about the user.
Is there any reason NOT to use JWT?
JWTs are interesting in that they are widely used and also widely criticized. Most criticisms of JWT fall into 3 categories:
- Overkill - There are a lot of moving parts with JWT. It's probably overkill for many use cases.
- Library/Implementation Vulnerabilities - Criticizing vulnerabilities in particular JWT libraries or implementations.
- Stateless Auth Downsides - Generally criticizing the practice of using any “stateless” client tokens. Because there’s no great way to revoke them early while remaining stateless, etc.
I’m going to try to give my best shot at addressing some of these issues.
First, you should decide if you need/want the benefits of JWT.
Because JWT has so many moving parts (fewer than some forms of authentication, but more than others), in my opinion you should only use it if you need some of its unique benefits.
If the following statements describe you, then JWT might be worth implementing:
- I have multiple services/backends that share authentication.
- I have non-browser clients that use my service.
- I am willing to take the time to understand JWT before implementing it.
- I am willing to keep my implementation up to date.
If you are willing to take the time to understand JWT, here are some resources to get you started.
- Intro to JWT - https://jwt.io/introduction/
- The Complete Guide to JSON Web Tokens - https://blog.angular-university.io/angular-jwt/
It seems like this is an easy one to address. Just use a good library and a good implementation. lol.
But seriously, JWT as a concept is strongly supported by the company Auth0, which curates a list of approved JWT libraries for many different languages, along with approved signing algorithms.
The other important part of this concern is keeping the library up to date. However, this is an issue with all open source projects.
If security is a primary concern, then you should already have a strategy to keep notified of the vulnerabilities in the libraries you are using.
If you work with Node.js, then the npm audit command should be an important part of your deployment workflow. Of course it's limited to known and reported library vulnerabilities, but it's much better than doing nothing.
If you’re working in a language other than Node.js, then you’ll need your own strategy for staying apprised of library vulnerabilities.
Good implementation comes from understanding. I think it's safe to say that blindly copy/pasting solutions from Stack Overflow really increases your likelihood of having a poor implementation.
This is authentication we’re talking about. And given how catastrophic the consequences can be, make sure you educate yourself and tread carefully when building your implementation.
Avoid blind copy/pasta unless you understand what it's doing. Personally, I also like to avoid big black-box libraries and build the minimum functional system.
Here are some resources to help:
- JWT Best Current Practices - https://auth0.com/blog/a-look-at-the-latest-draft-for-jwt-bcp/
- Critical vulnerabilities in JSON Web Token libraries - https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/
Don’t skip the reading, but some of the main things to do are:
Enforce Approved Algorithms
Have a short list of allowed (approved) algorithms, and ensure that your token verify function checks that the algorithm shown in the header is one of the approved algorithms.
Do some research to select the approved algorithms, but the most common seem to be:
- HMAC + SHA256
- RSASSA-PKCS1-v1_5 + SHA256
- ECDSA + P-256 + SHA256
Handle Asymmetric and Symmetric Algorithm Tokens Separately
Read more about this in the best practices article above, but this helps mitigate an attack where the public key of an asymmetric key pair (which is often easy for an attacker to get their hands on) is used as the secret key for a symmetrically signed (HMAC) token, tricking the server into accepting a forged signature.
Make sure your secret key is long enough. The rule of thumb is to make it as long as the hash output. So for a SHA-256-based algorithm like HS256, the secret key should be at least 256 bits (32 bytes) long.
Validate Nested Tokens
If using nested tokens, for example to hide sensitive information in the token body, be sure to validate all the way down.
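A sketch of what "validate all the way down" could look like. The `nestedToken` claim name is hypothetical, and the actual verification function is injected so this works with whatever library you use.

```javascript
// Recursively verify nested tokens: after verifying the outer token, if its
// payload carries another token, verify that one too, all the way down.
function verifyDeep(token, secret, verify) {
  const claims = verify(token, secret);
  if (typeof claims.nestedToken === 'string') {
    claims.nested = verifyDeep(claims.nestedToken, secret, verify);
  }
  return claims;
}
```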
Limit Token Utility
Avoid one-token-fits-all if possible. When issuing a token, make it clear in the claims what it should be used for, and when validating the token, make sure the claims match the requested use.
Validate Token Content
Be sure to validate the content of the token as well as its signature. Use claims like sub, and validate each individual claim after token verification and decoding to ensure that the right token is being used for the right thing.
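As an illustration, a hypothetical post-verification claim check might look like the following. The `requiredAudience` parameter and error messages are my own invention; the claim names (`sub`, `aud`, `exp`) are standard JWT claims. This runs only after the signature has already been verified.

```javascript
// Check individual claims after the token signature has been verified.
function checkClaims(claims, requiredAudience) {
  const now = Math.floor(Date.now() / 1000);
  if (typeof claims.sub !== 'string' || claims.sub.length === 0) {
    throw new Error('missing subject');
  }
  // Limit token utility: only accept tokens issued for this audience.
  if (claims.aud !== requiredAudience) {
    throw new Error('token was issued for a different audience');
  }
  if (typeof claims.exp !== 'number' || claims.exp <= now) {
    throw new Error('token expired');
  }
  return claims;
}
```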
Stateless Auth Downsides
Here is where some personal preference comes in. One of the biggest downsides to stateless authentication is that there is no way of invalidating the tokens once they are issued. This is a potential security issue as well as an inconvenience.
But I think this can be solved by using a stateful refresh token.
This does add more moving parts to the authentication machine, but it solves the important issue of revoking token access.
There are many ways to implement this. For me, at this time, I prefer this:
- Refresh tokens are stored in the database of an authentication service. One user can have many refresh tokens.
- Refresh tokens can only be issued through a full authentication process: sending a valid password, a single sign-on flow, etc.
- The response to a successful strong authentication includes a JWT (that includes the expiration time inside the token) and a refresh token.
- The client is responsible for keeping their JWT up to date by monitoring the expiration time and refreshing using the refresh token.
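The last step above can be sketched as a small helper that computes when the client should refresh: a bit before the exp claim (a Unix timestamp in seconds) is reached. The 60-second margin is an arbitrary assumption.

```javascript
// Compute how long the client should wait before refreshing its JWT,
// refreshing `marginSeconds` before the `exp` claim is reached.
function refreshDelayMs(exp, nowMs = Date.now(), marginSeconds = 60) {
  return Math.max(exp * 1000 - nowMs - marginSeconds * 1000, 0);
}

// The client would then do something like (fetchNewJwt is hypothetical):
// setTimeout(() => fetchNewJwt(refreshToken), refreshDelayMs(claims.exp));
```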
Alternatives to JWT
- PASETO - https://paseto.io
- The main stateless alternative to JWT. It restricts algorithms and other choices to help prevent developers from making common mistakes.
- The main drawback is that it's not very popular.
Generate the key for the user. This should be done on the local machine:
ssh-keygen -t rsa -b 4096 -o -a 100
Create a new user on the remote machine. The -m flag adds a default user home directory.
useradd -m new-user
Switch to the new user:
sudo su new-user
Create a .ssh directory for the user:
mkdir ~/.ssh
Set the right permissions:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Add the public key you created to the ~/.ssh/authorized_keys file.
This is mostly just a quick reference for me to use when I need to whip up a docker server. This is on Ubuntu 16.04.
First prepare the app registry:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
Then check the output of this command. It should have no docker installed but there should be a candidate:
apt-cache policy docker-ce
# docker-ce:
#   Installed: (none)
#   Candidate: 5:18.09.0~3-0~ubuntu-xenial
#   Version table:
#      5:18.09.0~3-0~ubuntu-xenial 500
#      ...
Then install docker:
sudo apt-get install -y docker-ce
Then check the output of this command. Docker should be loaded and active:
sudo systemctl status docker
# ● docker.service - Docker Application Container Engine
#    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
#    Active: active (running) since Fri 2019-01-04 22:45:48 UTC; 1min 22s ago
#      Docs: https://docs.docker.com
#  Main PID: 3538 (dockerd)
#    CGroup: /system.slice/docker.service
#            └─3538 /usr/bin/dockerd -H unix://
#
# .... Logs down here should say something like Started Docker Application Container Engine. at some point
Add your current user to the docker user group so it can access the docker socket:
sudo usermod -a -G docker $USER
Configure Docker API
Here is where we tell docker to listen to incoming API requests.
Before doing this step you should make sure you have a proper firewall implemented. There is no out of the box authentication for the docker API, and many hackers know docker. Once your docker instance is listening for requests, unless your docker port is protected by a firewall or something else, hackers WILL start running random containers on your docker server.
Now make a new file called /etc/systemd/system/docker.service.d/docker.conf and open it for editing:
sudo mkdir /etc/systemd/system/docker.service.d
sudo touch /etc/systemd/system/docker.service.d/docker.conf
sudo vim /etc/systemd/system/docker.service.d/docker.conf
Add this to the file:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
You can change the port your API listens on here if you want to.
Restart docker service:
sudo systemctl daemon-reload
sudo systemctl restart docker.service
Check the output of this command. It should have -H tcp://0.0.0.0:2375 in the docker command.
sudo systemctl status docker.service
# ● docker.service - Docker Application Container Engine
#    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
#   Drop-In: /etc/systemd/system/docker.service.d
#            └─docker.conf
#    Active: active (running) since Fri 2019-01-04 22:54:49 UTC; 3s ago
#      Docs: https://docs.docker.com
#  Main PID: 3791 (dockerd)
#     Tasks: 8
#    Memory: 30.0M
#       CPU: 178ms
#    CGroup: /system.slice/docker.service
#            └─3791 /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
#
# .... Logs down here should say something like Started Docker Application Container Engine. at some point
Now we can do a quick test. This command should return an empty array since we don’t have any images on our fresh docker host:
curl -X GET http://localhost:2375/images/json
# []
You can also test it from another computer with this command:
DOCKER_HOST=tcp://<DOCKER_HOST_IP_GOES_HERE>:2375 docker ps -a
Now we should be done.
Redis is a simple key-value store and is highly optimized for fast reads and writes. I found myself in a situation where I wanted to offload some app task logging from our document store (MongoDB) to Redis.
There are a few important things to consider when making this kind of change.
Multi-tenant apps are apps where multiple users share the same database but their data is isolated from one another. This describes almost any app with multiple users; for example, users can often only see and change their own data.
However, personally, I define multi-tenant apps as having a layer of data isolation above the level of the user. For example, you could have a data model called an organization, and the user can only see and interact with the data related to that organization.
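A sketch of that isolation rule in code: every read goes through a helper that requires an organization id, so a user can never see rows from another tenant. The field and function names here are hypothetical.

```javascript
// Tenant-scoped query helper: callers must supply an organization id, and
// only rows belonging to that organization are ever returned.
function scopedFind(rows, orgId, predicate = () => true) {
  return rows.filter((r) => r.organizationId === orgId && predicate(r));
}
```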
This really isn’t a complicated problem, but I want to document this for later.
It's hard to find a good title for this. Usually you will never use a Lambda function to upload to S3. For user-submitted files, the right way to upload to S3 is to generate a temporary signed upload URL, and the user submits directly to S3 without sending the file through the serverless function.
Many web apps rely on executable binaries to function. For example, if you want to do any kind of image processing, then in addition to the actual libraries you are using, you usually need a program like `imagemagick` to make it actually work.
So, if you ever want to build a sophisticated web app with the serverless framework, you need to be able to upload and use executable binaries. And it's best if you can upload them in such a way that the library knows how to find them without any extra configuration.
The serverless framework is a good example of this. It's so minimal in its setup that it may be difficult to know where to start to give it some structure. So here I'll share one possible way to structure a serverless API project.
But the next question is: how can you safely and conveniently store and manage these tokens in your React+Redux app?
The library that everyone uses to manage environment variables in Node is dotenv. I don't think I've ever had so much trouble with such a popular module.
What I want is to have my development environment run with one set of environment variables and my tests run with a different set of environment variables.