Architecture for uploading large files from multiple endpoints to cloud storage

I am working on a desktop application that uploads files to cloud storage. Storage providers make this easy: you receive an accessKeyId and secretAccessKey and you are ready to upload. I'm trying to find the optimal way to upload files.

Option 1. Package each instance of the application with the access keys. This way, files can be uploaded directly to the cloud without any intermediary. Unfortunately, I cannot run any logic before the upload reaches the cloud. For example, if each user has 5 GB of available storage, I cannot enforce this constraint at the storage provider itself. I could send a request to my own server before uploading to perform the check, but since the keys are hard-coded in the application, I'm sure that check would be easy to bypass.
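To make the flaw concrete, here is a minimal sketch of the kind of server-side quota check I have in mind (the 5 GB figure is the example above; the function name and signature are my own illustration):

```python
QUOTA_BYTES = 5 * 1024**3  # example 5 GB per-user quota

def can_upload(used_bytes, file_size_bytes, quota=QUOTA_BYTES):
    """Server-side quota check called by the client before uploading.

    In Option 1 this is only advisory: a client that extracts the
    hard-coded access keys can skip this call and upload directly.
    """
    return used_bytes + file_size_bytes <= quota
```

The check itself is trivial; the problem is that nothing forces a hostile client to respect its answer.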

Option 2. Send each uploaded file to a server, where the constraint logic can be executed, and then transfer the file to the final cloud storage. This approach suffers from a bottleneck at the server. For example, if 100 users upload (or download) a 1 GB file simultaneously and the server has 1000 Mbit/s of bandwidth, each user gets only 10 Mbit/s = 1.25 MB/s.
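The arithmetic behind that figure, spelled out (the 1000 Mbit/s uplink is the assumed example above):

```python
server_bandwidth_mbit = 1000          # assumed server uplink, Mbit/s
concurrent_users = 100

per_user_mbit = server_bandwidth_mbit / concurrent_users  # 10 Mbit/s each
per_user_mbyte = per_user_mbit / 8                        # 1.25 MB/s each
```

At 1.25 MB/s, a 1 GB file takes over 13 minutes per user, which is why the relay server becomes the limiting factor.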

Option 2 seems to be the way to go, because I control who can upload. I am looking for tips to minimize the bandwidth bottleneck. What approach is recommended for handling simultaneous uploads of large files to cloud storage? I'm thinking of deploying many instances with low CPU and memory and streaming each file through, instead of buffering the entire file first and then sending it.
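The streaming idea can be sketched as a chunked relay: the server reads a bounded chunk from the incoming request and writes it straight to the outgoing cloud upload, so memory use stays constant per connection regardless of file size. This is a generic sketch with file-like objects standing in for the HTTP request body and the cloud upload stream (the chunk size and function name are my choices, not from any particular SDK):

```python
import io

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB held in memory at a time, not the whole file

def relay(src, dst, chunk_size=CHUNK_SIZE):
    """Copy src to dst in fixed-size chunks; returns bytes transferred."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total

# Usage sketch with in-memory streams standing in for request body / cloud upload:
src = io.BytesIO(b"payload" * 1000)
dst = io.BytesIO()
relay(src, dst, chunk_size=1024)
```

In practice `src` would be the incoming request stream and `dst` the provider's streaming upload API (for example, SDKs such as boto3 accept a file-like object and perform a multipart upload under the hood), so a small instance can relay files far larger than its RAM.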