Consider an API that requires a JWT for authorisation. For each JWT presented, the API has to validate the signature, which involves base64 decoding, hashing, and asymmetric signature verification, the last of which is noted as being considerably more computationally expensive than symmetric (HMAC) verification.
Now suppose an attacker creates thousands of unique JWTs. The claims in each JWT differ but, as far as the API can tell before verification, could well be valid (e.g. the subject looks plausible); the signature, however, is random and will not validate.
The API endpoint has to validate each JWT individually before it can discard it. If validation is computationally expensive enough, the endpoint might run out of resources, denying service to legitimate users.
I recognise that standard DoS mitigations can help, such as IP blocking and geolocation blocking. However, if the attack is cheap enough, a relatively small botnet rotating through IP addresses and sending a relatively small amount of traffic could be hard to mitigate with existing DoS prevention.
I looked for benchmarks of JWT validation. The best I could find is here, which indicates figures in the order of 130,000 to 300,000 nanoseconds per operation. That equates to roughly 3,000 to 7,500 validations (operations) per second. Even if those figures are out by a factor of 10, sending 100,000 requests per second doesn't seem too much of a task for a botnet, and one could be quite cheap to rent.
Question: Is this a valid concern? Do JWTs, or signature-based authorisation schemes in general, make DoS attacks easier to execute and harder to mitigate?