Request Processing Capacity

Q: How many requests per second can the Web Server handle?

Short answer: It depends.

Long answer: It really depends on many factors.

Ok, ok… silliness aside, can we make any ballpark estimates?

The Web Server can be modeled as a queue. By necessity such modeling will be a simplification at best, but it may provide a useful mental model to visualize request processing inside the server.

Let’s assume your web application has a fairly constant processing time[1], so we’ll model the Web Server as an M/D/c queue where c is the number of worker threads. In this scenario, the Web Server has a maximum sustainable throughput of c / (processing time).

To use some simple numbers, let’s say your web app takes 1 second to process a request (that’s a very slow web application!). If the Web Server has c=128 worker threads, that means it can indefinitely sustain a max request rate of:

128/1 = 128 requests per second
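
For what it’s worth, here is that napkin math as a tiny Python sketch (the thread count and processing time are just the example numbers above, not measurements of any real server):

    worker_threads = 128       # c, the number of worker threads
    processing_time = 1.0      # seconds per request (our deliberately slow example)

    max_sustainable_throughput = worker_threads / processing_time
    print(max_sustainable_throughput)   # 128.0 requests per second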

This makes a lot of sense if we think about it:

  • At t = 0 seconds, 128 requests come in and each one is taken by a worker thread, fully utilizing server capacity.
  • At t = 1 second, all those requests complete and responses are sent back to the clients; at the same time 128 new requests come in and the cycle repeats.

At this request rate we don’t need a connection queue at all[2] because all requests go straight to a worker thread. This also means that at this request rate the response time experienced by the end user is always 1 second.

To expand on that, the response time experienced by the end user is:

end user response time = (connection queue wait time) + (processing time)

Since we’re not using the connection queue, the end user response time is simply the processing time[3].

So far so good. Now, what happens if the incoming request rate exceeds the maximum sustainable throughput?

  • At t = 10 seconds, 129 requests come in. 128 go straight to worker threads, 1 waits in the connection queue.
  • At t = 11 seconds, 128 requests come in. 128 (the one that was waiting plus 127 of the new ones) go straight to worker threads, 1 waits in the connection queue.

The connection queue absorbs the bumps in the incoming request rate, so connections are not dropped and worker threads can remain fully utilized at all times. Notice that now out of every 128 requests, one of them will have a response time of 2 seconds.
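
If you prefer to see this play out mechanically, here is a minimal lockstep simulation sketch (the function name and the per-second bookkeeping are just an illustration of the model above, not how any real server schedules work). Run it with 128 arrivals per second and the queue stays empty; run it with 129 and the queue grows by one entry every second:

    def simulate(arrivals_per_second, worker_threads=128, seconds=10):
        """Toy lockstep model: each second every worker finishes its 1-second
        request, then free workers are refilled from the queue and new arrivals."""
        queue = 0  # requests waiting in the connection queue
        for t in range(seconds):
            waiting = queue + arrivals_per_second
            started = min(worker_threads, waiting)   # requests that get a worker now
            queue = waiting - started                # the rest keep waiting
            # A request at the back of the queue waits about ceil(queue / workers)
            # seconds before processing even starts, then 1 second of processing.
            worst_response = 1 + -(-queue // worker_threads)
            print(f"t={t:2d}s  queued={queue:3d}  worst response ~{worst_response}s")

    simulate(129)   # overload: the queue grows by one entry every second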

So what happens next?

If we go back to receiving a steady 128 requests per second, there will always be one request in the connection queue.

If at some point we only receive 127 requests (or fewer), the server can “catch up” and the connection queue goes back to being empty.

On the other hand, if the incoming request rate remains at 129 per second, we’re in trouble! Every second the connection queue will grow longer by one entry. When it reaches 129 entries, one end user will experience a response time of three seconds, and so on.

And of course, the connection queue is not infinite. If the max connection queue size is 4096, then roughly 4096 seconds later it will fill up, and from that point onwards one incoming request will simply be dropped every second since it has no place to go. At this point the server has reached a steady state: it continues processing requests at the same rate as always (128 per second), accepting 128 of the 129 new requests each second and dropping one. End users are certainly unhappy by now, because they are experiencing response times of about 33 seconds (4096 / 128 = 32, so it takes 32 seconds for a new request to work its way through the queue, plus 1 second of processing). Almost like going to the DMV…
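
The saturated steady state boils down to a few lines of arithmetic (again, these are just the example numbers from above):

    worker_threads = 128         # requests completed per second (1 second each)
    arrivals_per_second = 129    # sustained incoming rate
    max_queue_size = 4096        # connection queue capacity

    growth_per_second = arrivals_per_second - worker_threads   # 1 entry per second
    seconds_until_full = max_queue_size / growth_per_second    # ~4096 seconds
    queue_wait = max_queue_size / worker_threads                # 32 seconds
    end_user_response_time = queue_wait + 1                     # 33 seconds
    dropped_per_second = growth_per_second                      # 1 request per second

    print(seconds_until_full, queue_wait, end_user_response_time, dropped_per_second)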

Only once the incoming request rate drops below the maximum sustainable rate (here, 128 per second) can the server start to catch up and eventually clear the queue.

In summary, while this is certainly a greatly simplified model of the request queue behavior, I hope it helps visualize what goes on as request rates go up and down.

Theory aside, what can you do to tune the web server?

  • The single best thing to do, if possible, is to make the web app respond quicker!
  • If you want to avoid dropped connections at all costs, you can increase the connection queue size. This will delay the point where the server reaches a steady state and starts dropping connections. Whether this is useful really depends on the distribution of the incoming requests. In the example above we’ve been assuming a very steady incoming rate just above the maximum throughput rate. In such a scenario increasing the connection queue isn’t going to help in practice because no matter how large you make it, it will fill up at some point. On the other hand, if the incoming request rate is very bumpy, you can dampen it by using a connection queue large enough to avoid dropping connections. However… consider the response times as well. In the example above your end user is already seeing 33-second response times. Increasing the connection queue length will prevent dropped connections but will only make the response times even longer. At some point the user is simply going to give up, so increasing the connection queue any further won’t help!

  • Another option is to increase the number of worker threads. Whether this will help or hurt depends entirely on the application. If the request processing is CPU bound then it won’t help (actually, if it were truly CPU bound, which is rare, then you’ll probably benefit from reducing the number of worker threads unless your server has 128+ CPUs/cores…). If the web app spends most of its time just waiting for I/O, then increasing the worker threads may help; see the napkin-math sketch after this list. There is no set answer here; you need to measure your application under load to see.
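
As a rough way to reason about that last point, here is one more napkin-math sketch. The split between CPU time and I/O wait time, the core count, and the min() formula are all assumptions made up for illustration, not a formula taken from the server itself:

    # Hypothetical per-request cost split and hardware (illustration only).
    cpu_time = 0.1    # seconds of CPU work per request
    io_time = 0.9     # seconds spent waiting on I/O per request
    cores = 8         # CPU cores available

    def max_throughput(worker_threads):
        by_threads = worker_threads / (cpu_time + io_time)   # limited by threads
        by_cpu = cores / cpu_time                             # limited by cores
        return min(by_threads, by_cpu)

    print(max_throughput(64))    # 64.0  -> thread limited; more threads would help
    print(max_throughput(128))   # 80.0  -> CPU limited; more threads won't help

If the per-request time were mostly CPU work instead of I/O waiting, the CPU limit would kick in at a much lower thread count, which is the “reduce the number of worker threads” case mentioned above.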

[1] In reality the response time won’t be deterministic. At best it may be more or less constant up to the point where the server scales linearly, but after that the response time is going to increase depending on load. On the flip side, caching might make some responses faster than expected. So M/D/c is certainly a simplification.

[2] Not true for several reasons, but it’ll do for this simplified model and it helps to visualize it that way.

[3] Plus network transmission times, but since we’re modeling only the web server internals, let’s ignore those.