I was dealing with an aggressive bot/crawler last week.
The bot is distributed across random IPs with varying user agents, so it is hard to block, but I have a separate thread for that. The problem here is that a flood of HTTP requests can cause Tomcat to die.
The Tomcat process itself stays up and is not running out of memory; it just stops accepting HTTP or HTTPS requests. Every HTTP request times out, while HTTPS requests are still accepted (although when HTTP is attacked, sometimes HTTPS dies as well).
I did see a "too many open files" error at one point, so I raised the file-descriptor limit from 10,000 to 50,000, which seems to help: at least HTTPS no longer dies, but HTTP still always does. I have not seen "too many open files" recently.
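For reference, this is roughly how I raised the limit, assuming Tomcat runs under systemd as a unit named `tomcat` (the unit name and path are placeholders; adjust for your setup, e.g. create the override with `systemctl edit tomcat`):

```ini
# /etc/systemd/system/tomcat.service.d/override.conf  (hypothetical unit name)
[Service]
LimitNOFILE=50000
```

Note that systemd services ignore `/etc/security/limits.conf`, so the limit has to be set on the unit itself.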
That seems like an extreme number of open files. Why would Tomcat hold so many open, and could there be a file-descriptor leak under high load? The server sometimes runs fine for six months at a stretch (so there can't be a leak under normal conditions), yet it dies quickly under heavy load.
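In case it helps, this is how I have been counting the descriptors the process actually holds. Sockets, pipes, and epoll handles all count against the "open files" limit, not just regular files, so a request flood alone could plausibly exhaust it. `TOMCAT_PID` is a placeholder for the real PID; `$$` is only a fallback so the snippet runs standalone:

```shell
#!/bin/sh
# Count open file descriptors for a process via /proc. Every socket held
# by a pending or slow connection counts against the "too many open files"
# limit. TOMCAT_PID is a placeholder; set it to the real Tomcat PID.
PID=${TOMCAT_PID:-$$}   # fall back to the current shell for a standalone demo
echo "open descriptors for PID $PID: $(ls /proc/"$PID"/fd | wc -l)"
```

Running this against the Tomcat PID during an attack and comparing it with the 50,000 limit should show whether descriptors, rather than threads, are the bottleneck.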
The website is a large site with >1 million pages (dynamic content) and >1 million visits per day.
What happens when Tomcat receives a sustained flood of HTTP requests (say >100 per second for an extended period)? I assume the requests start to queue up. Once the thread pool has no free threads, does Tomcat keep queuing requests until something breaks, or does it start rejecting them?
Is there a way to start rejecting requests once a certain backlog builds up? That seems like the only way to avoid dying or crashing under extreme load.
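To make the question concrete, here is a sketch of the kind of `server.xml` Connector tuning I imagine would do this. `maxThreads`, `acceptCount`, and `maxConnections` are real Connector attributes in Tomcat 8.5 as far as I can tell, but the values below are purely illustrative, not my current config:

```xml
<!-- Sketch only: illustrative values. Once maxConnections is reached,
     up to acceptCount further connections queue in the OS, and anything
     beyond that should be refused instead of piling up. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="400"
           maxConnections="8192"
           acceptCount="200"
           connectionTimeout="5000" />
```

Is this the right mechanism, or does rejection under load need something else (a fronting proxy, a valve, etc.)?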
My HTTP and HTTPS connectors are configured differently, which may be related to why HTTP is the one that dies: the HTTPS connector sets maxThreads while the HTTP connector does not. (What is the default number of threads?)
Using Tomcat v8.5.47, CentOS 7.6, Oracle Java 1.8