Performance degrades as we scale up containers
-
When we scale the number of Docker containers, we see performance degrade.
Containers    avg_response_time    error%
1             15 secs              0%
2             25 secs              0%
5             35 secs              10%
10            35 secs              14%

I increased the timeouts to 90s for the template engine, scripts, and chrome. Following is my test scenario:
500 users
120 secs ramp up
1 iteration
Response PDF - 30KB

I am using JMeter for running the tests.
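For reference, the timeout changes look roughly like this in the jsreport config (a sketch assuming jsreport 2.x; the exact option names, especially for the scripts extension, may differ in your version):

{
  "chrome": { "timeout": 90000 },
  "templatingEngines": { "timeout": 90000 },
  "extensions": {
    "scripts": { "timeout": 90000 }
  }
}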
Not sure if this is related to the issue I posted earlier
https://forum.jsreport.net/topic/732/error-when-scaling-docker-containers/8
-
Hm, I am not sure that scaling with multiple containers on the same VM is such a good idea.
jsreport itself spawns extra processes for chrome, the templating engines, and scripts evaluation.
There is nothing heavy running in the main process anyway.
-
We have 4 docker hosts. If we can only have 4 jsreport containers, it defeats the purpose of using Docker. Also, each container is in its own world, isolated from the others. Why would a process in one container interfere with another one on the same host?
Note: The hosts are sufficiently provisioned to handle larger loads.
-
"We have 4 docker hosts"
Please clarify this: a Docker host means one standalone server or machine. Do you have one container running on each of the 4 servers (4 containers in total), or do you have 4 containers running on just one server?

"Why would a process in one container interfere with another one on the same host?"
According to your previous post, you are using the fs store for all your containers, so the slowdown you see is probably caused by all of these processes reading the same directory. The locks the fs store uses may be the problem: each process tries to acquire a lock and the other tasks get delayed. You may get better performance with a store backed by a database; you can find them here and use the one you want. You should try one db-based store and test whether it changes your performance results.
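For example, switching to the Postgres store is roughly this kind of configuration (a hedged sketch assuming jsreport 2.x with the jsreport-postgres-store extension; the connection options and key names may differ in your setup):

{
  "store": {
    "provider": "postgres"
  },
  "extensions": {
    "postgres-store": {
      "host": "localhost",
      "port": 5432,
      "database": "jsreport",
      "user": "postgres",
      "password": "password"
    }
  }
}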
-
As we are still evaluating, I have just one host (1 VM) for now. All the tests with 1, 2, 5, and 10 containers were run on the same Docker host (a single VM). However, in production we will have either 4 or 5 hosts. Is there any reason why the file needs to be locked for reading? I will definitely try your suggestion of a different store.
Thanks for your help.
-
"Any reason why the file needs to be locked for reading?"
It is not locked for reading, but we store the last few request logs and display them in the studio dashboard.
You can try disabling the studio extension, which stores these logs. There should be no locks afterwards:

{
  "extensions": {
    "studio": {
      "enabled": false
    }
  }
}
-
Nice. I will try that.
-
Sorry :(. Disabling the studio doesn't make it better. I can see that I can't access the studio anymore, so the setting is definitely in place. I will try different stores to see if that makes any difference.
-
Configured to use the Postgres store. Still the same issue. Have you done similar tests? If so, would it be possible for you to share the test scenario? That way, I can check whether I have done anything wrong with my tests.
-
In your test you make 500 requests in parallel. And if you send all of them to a single container, they get processed faster than if you split them and send 50 in parallel to each of the 10 containers?
I am not so surprised. As I already mentioned, I don't currently see the point in having multiple containers running on a single host.
You argue back:
"Also, each container is in its own world, isolated from the others. Why would a process in one container interfere with another one on the same host?"
This is not true. By default, containers compete for the same CPU and memory resources.
In your test with 1 container, the single container quite likely creates 500 chrome instances and processes all requests in parallel anyway.
In your test with 10 containers, each of them creates 50 chrome instances. You see, in both cases there is the same parallelization. What makes the difference is the number of hosts, not the number of containers, because the containers compete for the same resources.
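For completeness: how many chrome instances one container spawns is governed by the chrome-pdf strategy. Capping it with a pool looks roughly like this (a hedged sketch assuming jsreport 2.x chrome-pdf options; verify the option names against your version):

{
  "chrome": {
    "strategy": "chrome-pool",
    "numberOfWorkers": 4
  }
}

The idea is that requests above the worker count wait for a free chrome instance instead of each launching its own.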