Background
A few (ok, many) years ago I was working with a customer who was launching a new application and expecting high load at launch, which for whatever reason was scheduled for 3pm on a Friday (1). As 3pm hit and the load balancers started directing traffic to the now-production system, there was a panic as synthetic tests started to time out. This was not good: users were unable to interact with the web application, or if they could, it was dead slow (minutes to load a page).
Using observability tooling I was quickly able to see that they had run out of worker threads and that requests were being queued beyond the timeout of the synthetic agent. The fix was simple: increase the number of worker threads so that requests could be handled in parallel rather than waiting for a thread to become free.
The increase from 25 to 100 threads immediately brought the responsiveness of the application back within the SLA that the application team had promised to the business.
So why did I recommend increasing the number of threads from 25 to 100?
If you’ve ever managed a webserver and seen the max connections or worker threads settings, you might be tempted to think that bigger is better. But there are a number of factors to consider before blindly increasing the number of threads.
When things start to become “slow”, as an Observability and Digital Performance expert I need to consider the type of workload, the utilisation of resources (such as CPU, memory, and storage IO), and any errors/events that might be occurring. I then leverage APM traces to understand where time is being spent, in the code or even in the application server.
In this case all worker threads were being consumed, but not all CPU cores were. This led me to the traces, and what I saw was that the actual application response time was quick: once a request reached application code, it executed very fast. The time was being spent in the application server (Tomcat in this case), which was queueing requests that the thread pool could not pick up quickly enough.
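For context, the size of this pool in Tomcat is controlled by the maxThreads attribute on the HTTP Connector in server.xml. The attribute names below are real Tomcat settings; the values and surrounding fragment are illustrative, reflecting the 25-to-100 change described above rather than the customer's actual config:

```xml
<!-- server.xml (illustrative fragment): maxThreads caps the request-
     processing pool; acceptCount is the queue for connections that
     arrive while every thread is busy. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="100"
           acceptCount="100"
           connectionTimeout="20000" />
```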
So the code executes quickly, but requests sit in a queue waiting for a thread. If everything executes quickly yet requests still time out, we need a way to increase the number of requests being executed simultaneously, with the side effect that each individual request takes slightly longer. If we have an equal number of workers to CPU cores, a single thread has effectively uncontended access to a core; if we increase the number of threads beyond the number of cores, we have to rely on the operating system scheduler to share the cores between threads.
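The queueing behaviour can be made concrete with Little's Law: a pool of N workers, each taking S seconds per request, can sustain at most N/S requests per second, and beyond that rate requests queue. The 600 ms service time below is an assumption for illustration (the real service time from the incident wasn't recorded); the thread counts are the ones from the story:

```python
# Saturation throughput of a fixed-size worker pool (Little's Law: L = lambda * W).
# The 600 ms service time is an illustrative assumption, not a measured value.
def max_throughput(workers: int, service_time_s: float) -> float:
    """Requests/sec the pool can sustain before requests start to queue."""
    return workers / service_time_s

print(max_throughput(25, 0.6))   # 25 threads  -> ~41.7 req/s
print(max_throughput(100, 0.6))  # 100 threads -> ~166.7 req/s
```

Any arrival rate above the first figure explains the symptom exactly: fast code, but a queue that grows until the synthetic agent times out.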
Additionally, as we increase the number of worker threads, we also increase the likelihood of concurrency issues (locks, race conditions), since more threads contending for the same cores means each thread takes longer to execute its workload.
Using NGINX as an example, its documentation recommends setting the number of workers to the number of cores, or auto if in doubt (2). I’m going to use a benchmarking tool called Apache Benchmark against a webserver that has two cores and two workers, serving a page that calculates the first 1000 prime numbers.
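That recommendation boils down to a one-line directive; a minimal nginx.conf fragment (directive names are real, the rest of the config is elided) looks like:

```nginx
# Match worker processes to CPU cores; "auto" detects the core count.
worker_processes auto;

events {
    # Per-worker connection cap; tuned separately from the worker count.
    worker_connections 1024;
}
```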
Test 1 – 1 Concurrent Request
In this test we have two worker threads and one concurrent request. We see that the mean response time is 620ms. Not bad, with the total time to process the ten requests at 6.197 seconds.
root@client:~# ab -n 10 -c 1 http://192.168.20.17/index.php
Time taken for tests: 6.197 seconds
Requests per second: 1.61 [#/sec] (mean)
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 606 620 37.7 608 727
Waiting: 605 620 37.7 608 727
Total: 606 620 37.7 608 727
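The page behind these numbers is a PHP script (index.php) that computes the first 1000 primes. I don't have the original source to hand, but a Python stand-in for the same CPU-bound workload might look like:

```python
# CPU-bound stand-in for the index.php used in these benchmarks: find the
# first 1000 primes by trial division (deliberately naive, so it burns CPU).
def first_n_primes(n: int) -> list[int]:
    primes: list[int] = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime if no smaller prime up to sqrt(candidate) divides it
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_n_primes(1000)[-1])  # 7919, the 1000th prime
```

The key property for the benchmark is that the work is pure computation: the worker holds its CPU core for the full duration of the request.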
Test 2 – 2 Concurrent Requests
In this test we have two worker threads and two concurrent requests. We see that the mean response time is 624ms. Pretty comparable to the previous test, however the total test time was reduced to 3.7 seconds.
root@client:~# ab -n 10 -c 2 http://192.168.20.17/index.php
Time taken for tests: 3.748 seconds
Requests per second: 2.67 [#/sec] (mean)
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 607 624 12.4 624 652
Waiting: 607 624 12.3 624 652
Total: 607 624 12.5 625 652
Test 3 – 4 Concurrent Requests
In this test we still have two worker threads but four concurrent requests. We see that the mean response time increased to 1162ms. This is roughly double the request duration of test two, however the total time taken to serve the ten requests was almost the same as test two at 3.8 seconds.
Doubling the number of concurrent requests again to eight showed the response time increase to be roughly linear.
root@client:~# ab -n 10 -c 4 http://192.168.20.17/index.php
Time taken for tests: 3.821 seconds
Requests per second: 2.62 [#/sec] (mean)
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 691 1162 319.7 1205 1748
Waiting: 691 1160 317.0 1205 1737
Total: 691 1162 319.6 1205 1748
Test 4 – 4 Concurrent Requests and 4 Workers
This test oversubscribes worker threads to CPU cores two to one (four workers on two cores), relying on the operating system scheduler to load balance the requests across the cores.
The performance was comparable to test three, in fact slightly worse by ~100ms.
root@client:~# ab -n 10 -c 4 http://192.168.20.17/index.php
Time taken for tests: 3.978 seconds
Requests per second: 2.51 [#/sec] (mean)
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 621 1205 304.1 1280 1483
Waiting: 621 1205 304.2 1280 1483
Total: 621 1205 304.1 1280 1483
Conclusion
Overall the best performance was two workers with two concurrent requests, lining up with the general advice of an equal number of workers to cores. However, this workload (prime number generation) fully utilises its CPU core while it runs. Other workloads will require less CPU time whilst waiting on dependencies (e.g. DB calls), and for those, over-subscribing worker threads will improve results. So, like everything in IT, the correct value is “it depends” and “bigger is not necessarily better”.
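That “it depends” can be demonstrated in miniature. The sketch below (Python rather than PHP, with time.sleep standing in for a DB call) shows why extra threads pay off for IO-bound work: a sleeping thread holds no CPU, so eight waiting requests can overlap even on a couple of cores:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_db_call() -> None:
    # Stand-in for an IO-bound dependency (e.g. a DB query): the thread
    # sleeps, holding no CPU, so other threads can run in the meantime.
    time.sleep(0.2)

def run(workers: int, requests: int = 8) -> float:
    """Serve `requests` fake requests through a pool; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for future in [pool.submit(fake_db_call) for _ in range(requests)]:
            future.result()
    return time.perf_counter() - start

print(f"1 worker:  {run(workers=1):.2f}s")   # ~8 x 0.2s, serialised
print(f"8 workers: {run(workers=8):.2f}s")   # waits overlap, far faster
```

Run the same experiment with a CPU-bound task in place of the sleep and the speed-up disappears once workers exceed cores, which is exactly what tests three and four showed.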
If you made it this far, thanks for reading. Check out the book section for interesting books on Observability.
(1) This is the best way to ruin a weekend for your hard-working staff. Read-only Fridays make for happy engineers.
(2) https://nginx.org/en/docs/ngx_core_module.html#worker_processes