In your test... you make 500 requests in parallel. And sending all of them to a single container gets them processed faster than splitting them up and sending 50 in parallel to each of the 10 containers?
I am not so surprised. As I already mentioned, I currently don't see the point in having multiple containers running on a single host.
You argue back:
Also, the containers are in their own world, isolated from one another. Why would a process in one container interfere with another one on the same host?
This is not true. Isolation covers namespaces (filesystem, network, process IDs), not hardware: by default, containers compete for the same CPU and memory resources of the host.
In your test with 1 container... the single container quite likely creates 500 Chrome instances and processes all the requests in parallel anyway.
In your test with 10 containers... each of them creates 50 Chrome instances.
You see, in both cases there is the same degree of parallelization. What makes the difference is the number of hosts, not the number of containers, because containers on one host fight for the same resources.
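The argument above can be put as a toy model (the helper name and the 16-core figure are illustrative, not from the test): the number of renders that truly run at once is capped by the host's cores, however the 500 requests are split across containers.

```python
def effective_parallelism(n_containers: int, requests_per_container: int,
                          host_cores: int) -> int:
    # Unlimited containers share every core of their host, so how the
    # requests are split across containers changes nothing; only the
    # core count of the host does.
    total_requests = n_containers * requests_per_container
    return min(total_requests, host_cores)

# One container x 500 requests vs ten containers x 50 requests,
# on the same hypothetical 16-core host: the cap is identical.
print(effective_parallelism(1, 500, host_cores=16))
print(effective_parallelism(10, 50, host_cores=16))

# Add a second 16-core host (5 containers each) and the cap doubles.
print(effective_parallelism(5, 50, host_cores=16) * 2)
```

In other words, to render more pages at once you add hosts (or cores), not containers.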