The workers.numberOfWorkers config limits parallelism: each worker processes only one request at a time.
If you send 120 requests at once, the requests that don't get a free worker wait in the queue and eventually fail with Error: Timeout when waiting for worker.
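A minimal sketch of that failure mode, assuming each worker handles one request at a time (the names NUM_WORKERS and WAIT_TIMEOUT_S are hypothetical stand-ins for workers.numberOfWorkers and the worker-wait timeout):

```python
import concurrent.futures
import time

NUM_WORKERS = 4        # stand-in for workers.numberOfWorkers
WAIT_TIMEOUT_S = 0.75  # stand-in for the worker-wait timeout

def handle_request(i: int) -> int:
    time.sleep(0.5)    # simulated per-request processing time
    return i

# Fire more requests than there are workers: only NUM_WORKERS run at
# once, the rest queue up behind them.
with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
    futures = [pool.submit(handle_request, i) for i in range(12)]
    # Requests still queued when the timeout fires correspond to the
    # ones that fail with "Timeout when waiting for worker".
    done, pending = concurrent.futures.wait(futures, timeout=WAIT_TIMEOUT_S)

print(f"completed: {len(done)}, still waiting: {len(pending)}")
```

With 12 requests and 4 workers, only the first batch finishes before the timeout; the other 8 are still waiting when it fires.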

Increasing workers.numberOfWorkers should help, but if you raise it too far you will hit the server's resource limits. A good starting value is around 2x the number of CPUs.
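As a sketch of that rule of thumb (the variable name suggested_workers is illustrative, not part of the real config):

```python
import os

# Rule-of-thumb starting point: twice the number of CPUs reported by
# the OS; fall back to 1 CPU if the count cannot be determined.
suggested_workers = 2 * (os.cpu_count() or 1)
print(suggested_workers)
```

You would then set workers.numberOfWorkers to this value and tune it from there based on observed load.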

You mentioned you have a scaling limit of 15 instances. However, scaling out typically does not take effect immediately: a burst of 120 requests will land on a single instance and overload it, resulting in the timeout described above.