While I am a firm believer in a 410 over a 404 error response, this approach depends on Google actually visiting each page one at a time. If your site does not enjoy frequent visits from Googlebot because it is not considered a highly fresh and trendy site, it could take quite some time for Google to find each page before removing it.
When a site is hacked, it is often impractical to remove each URL using the Remove URLs tool in Google Search Console, though this remains an option, with limitations of course. More on this later.
One potentially faster option is to use the robots.txt file.
Google will fetch the robots.txt file each time it visits your site, provided that it has not already fetched a fresh copy within the past 24 hours. This is seen as a reasonable compromise between fetching the robots.txt file on every visit and fetching it too infrequently. Previously, there was no standard for this, and there were always detractors complaining that the robots.txt file was read either too frequently or not frequently enough. Yes. Sometimes Google cannot win.
When the robots.txt file is fetched, it is saved within the index and applied as Googlebot goes about its business. However, there is also a process that applies regular-expression (regex) rules derived from the directives found within the robots.txt file and removes matching URLs and pages from the index. This is not done immediately, likely to guard against short-term mistakes made by the webmaster. However, because robots.txt is taken very seriously as a pivotal rules mechanism for well-behaved robots, Google will apply it fairly quickly. It may still take days or weeks, but it is done in bulk.
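Google's internal process is not public, but the idea of deriving regex rules from robots.txt directives can be sketched. The function below is a hypothetical illustration of how a Disallow path using Google's two wildcards (`*` for any sequence of characters, `$` to anchor the end of the URL) maps onto a regular expression that can be matched against indexed URL paths:

```python
import re

def robots_rule_to_regex(disallow_path: str) -> re.Pattern:
    """Translate a robots.txt Disallow path into a regex.

    Supports the two wildcards Google recognizes:
      *  matches any sequence of characters
      $  anchors the match to the end of the URL
    (Illustrative sketch only; not Google's actual implementation.)
    """
    anchored = disallow_path.endswith("$")
    if anchored:
        disallow_path = disallow_path[:-1]
    # Escape regex metacharacters, then restore * as ".*"
    pattern = re.escape(disallow_path).replace(r"\*", ".*")
    return re.compile("^" + pattern + ("$" if anchored else ""))

# Hypothetical rule blocking injected spam pages under /cheap-pills/
rule = robots_rule_to_regex("/cheap-pills/*")
print(bool(rule.match("/cheap-pills/spam-123.html")))  # True
print(bool(rule.match("/blog/post")))                  # False
```

Because a single pattern like this matches every injected URL at once, a bulk sweep over the index is far faster than recrawling each page individually.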
For this reason, the robots.txt file is often the fastest way to remove URLs, provided that they can be specified by a pattern. While not every search engine treats robots.txt directives equally, Google fortunately allows wildcards, giving you a serious advantage.
On Google's documentation page at https://support.google.com/webmasters/answer/6062596?hl=en&ref_topic=6061961, under "Pattern-matching rules to streamline your robots.txt code," you will find similar examples.
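For example, if hacked pages were injected under a common directory, query parameter, or file extension, wildcard rules like these could cover all of them at once (the paths and parameter names here are hypothetical; substitute the patterns your own hacked URLs share):

```
User-agent: Googlebot
# Block everything under the injected spam directory
Disallow: /cheap-pills/
# Block any URL containing the injected query parameter
Disallow: /*?ref=spam
# Block any URL ending in the injected file extension
Disallow: /*.php.suspected$
```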
Google does not guarantee that the URLs will be removed and states that it will take some time to remove the URLs.
However, it has been my experience that this method works and works faster than waiting for Google to fetch each page one at a time.
One warning: if you block Google from fetching these pages via the robots.txt file, Google will never see a 404 or 410 error for them. You have to choose one method or the other. Google does recommend using Google Search Console to remove URLs.
I prefer to wait for Google to remove pages naturally using a 404. A 410 is faster, since each 404 is retested several times before the URL is removed. However, given that your site has been hacked and these pages remain within the search results, it may be wise to remove the pages using another method. I have personally removed pages in bulk using this method, though it was a couple of years ago. Which one you use is up to you.