More on Request Limiting

Continuing with my coverage of check-request-limits (see request limiting and concurrency limiting), today I’ll give a few examples using the <If> tags introduced in Web Server 7.0.

As I showed in the previous articles, using server variables with the monitor parameter can be quite flexible. However, there are times when you may want to apply check-request-limits in ways which cannot be expressed using monitor parameter values. For those cases, the <If> expressions, also introduced in Web Server 7.0, come in handy.

Let’s look at an example. Say you want to apply limits only to request paths which end in *.jsp. You could write:

<If $path = "*.jsp">
PathCheck fn="check-request-limits" max-rps="10"
</If>

Simple enough! There are some pitfalls to watch out for, though. Take a look at this example:

<If $path = "*.pl">
PathCheck fn="check-request-limits" max-rps="100"
</If>
<If $path = "*.jsp">
PathCheck fn="check-request-limits" max-rps="10"
</If>

At first glance one might think this limits all “*.pl” paths to 100rps and all “*.jsp” paths to 10rps. Not so! Recall that request counts and averages are tracked separately for each value of the monitor parameter (thus, for example, when monitor="$ip" counts are kept separately for every client IP). In the above two invocations of check-request-limits there are no monitor parameters. So where are the counts kept? When the monitor parameter is not given, counts are kept in a default unnamed slot.

By now you can probably see the issue… the counts for the two calls above are kept in the same counter. So, whenever a “*.pl” request is processed, the counter increases. But this same counter is the one used for “*.jsp” requests! If the server is processing an average of 20rps worth of “*.pl” requests, no request for “*.jsp” will ever be serviced… Not quite what I wanted!

Fortunately, this is easy to correct by simply tracking each type of request separately, using the monitor parameter:

<If $path = "*.pl">
PathCheck fn="check-request-limits" max-rps="100" monitor="pl"
</If>
<If $path = "*.jsp">
PathCheck fn="check-request-limits" max-rps="10" monitor="jsp"
</If>
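
Incidentally, if you would rather not invent a static name for every rule, you could also key the counters off a server variable. Here is a sketch of that variation: monitor="$uri" keeps a separate counter for every individual path instead of one shared counter per file type.

<If $path = "*.pl">
PathCheck fn="check-request-limits" max-rps="100" monitor="$uri"
</If>
<If $path = "*.jsp">
PathCheck fn="check-request-limits" max-rps="10" monitor="$uri"
</If>

Whether you want one counter per file type or one per path depends on what you are trying to protect.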

Web Server 7.0 Concurrency Limiting

Last week I talked about the new request limiting feature in Web Server 7.0. I’ll expand on this topic today by covering the concurrency limiting feature of check-request-limits.

Once again let’s start with the simplest possible usage:

PathCheck fn="check-request-limits" max-connections="2"

The max-connections option tracks instantaneous connections, as opposed to averages over time as max-rps does. The above tells check-request-limits that I want to allow only two requests to be in processing at any one time. If two requests are being serviced and a third one arrives, the third one will be rejected with the desired error code (as before, the default is 503).

As before, this minimal example isn’t really useful. For one thing, I probably never want to limit the entire server to just two simultaneous requests. As with max-rps, the use cases become more interesting when the monitor parameter is introduced:

PathCheck fn="check-request-limits" max-connections="2" monitor="$ip"

Now I’m limiting the server to only process two simultaneous requests from any one client (by IP) but there is no limit to the number of distinct clients which can be serviced at the same time (well, there are other limits which apply such as server capacity and worker threads ;-)

I won’t repeat the various examples I gave last time, but suffice it to say that all of them can be used with max-connections just as well as with max-rps. You can use any server variable, or a combination of several variables, to establish the domain over which the limits apply.
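
For instance, here is a sketch of one such combination: keying on both the client IP and the URI limits each client to two simultaneous requests for any given URI, while other URIs and other clients remain unaffected.

PathCheck fn="check-request-limits" max-connections="2" monitor="$ip:$uri"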

Note that if a request exceeds the limit, it is rejected right away. If you want to limit the number of active requests being processed but allow additional ones to queue up, you can do that by limiting the maximum number of worker threads instead.
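
As a rough sketch of that alternative (treat the element names here as my assumption from memory, and check the server.xml reference for the exact syntax), the worker thread count is part of the thread-pool configuration in server.xml:

<!-- illustrative value only; verify against the server.xml reference -->
<thread-pool>
  <max-threads>128</max-threads>
</thread-pool>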

Let me know of any interesting and useful use cases for check-request-limits that you come up with… I’ll add some more examples and notes about it in a future entry. Have fun with it!


Web Server 7.0 Request Limiting

One of the cool new features in Web Server 7.0 is the check-request-limits SAF. In a nutshell, it can be used to selectively track request rates and refuse requests if limits are exceeded. While its primary purpose is to help protect against denial of service attacks consisting of high request rates, it is also useful in other scenarios where limiting request counts is of interest.

Here’s the simplest possible invocation:

PathCheck fn="check-request-limits" max-rps="10"

This instructs the SAF to track the average requests per second (rps) and refuse requests if the average exceeds 10rps. The average rate is recomputed every 30 seconds (the default interval) based on the number of requests received in those past 30 seconds. If the average rps exceeds 10, all subsequent requests are rejected with HTTP response code 503 (service unavailable), the default rejection response. Requests will continue to be rejected until the average rps falls below the threshold of 10. Since the average is only recomputed once per interval, this means it’ll be at least one interval before normal service resumes. Naturally, these defaults can all be changed. The following invocation is equivalent to the prior one but shows the default values explicitly:

PathCheck fn="check-request-limits" max-rps="10" interval="30"
                                    continue="threshold" error="503"

The other possibility for the continue option is “silence”. If set, the incoming request count must fall to zero (for an entire interval) before normal service resumes. You can use this if you want to force the offending requests to truly “go away” before allowing any more to be serviced.
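
For example, this variant keeps rejecting requests until a full interval goes by with no matching requests at all (the other parameters are the same defaults shown above):

PathCheck fn="check-request-limits" max-rps="10" interval="30" continue="silence"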

Now, in most cases one would not use a line such as the above in a real server, because it tracks all requests globally (for that web server process) and that is overly heavy-handed unless you really want to limit the entire server to such a low average request rate. Perhaps on a home server, but not in most cases.

You may have read about the server variables and <If> tag also introduced in Web Server 7.0. Let’s use some of those capabilities to make the request limiting more interesting:

PathCheck fn="check-request-limits" max-rps="10" monitor="$ip"

The “monitor” parameter is optional but in nearly every case you will want to give it a value. It instructs check-request-limits to track request statistics using separate counters for each monitored value. In the example above, separate stats will be kept for every client IP ($ip expands to the client’s IP) making a request to the server. That’s more like it! Now, any client which exceeds my set limit (10rps) will be refused service but all other clients continue to experience normal operation.

You can use any of the supported server variables as the value of “monitor”, of course. Another interesting one might be $uri:

PathCheck fn="check-request-limits" max-rps="10" monitor="$uri"

Here, instead of setting limits for each client, we set the limit for each URI on the server. Perhaps some areas of your server are harder hit and you wish to limit use of those while allowing normal servicing of other areas? The above directive will accomplish that.

In fact, you can combine variables as well. This is also legal:

PathCheck fn="check-request-limits" max-rps="10" monitor="$ip:$uri"

Here the SAF will limit only specific clients which request the same URI(s) too frequently (over 10rps, that is) but all other URIs for those clients and all other clients continue to be serviced normally. Cool!

This functionality is fairly flexible so experiment with it for a bit. While these examples will get you started, I’ll describe a few other scenarios later on.

Here are links to relevant parts of the documentation:


My Filesystem Is Broken?

In my previous humorous take on password encryption I inserted a few phrases meant to highlight some of the issues. Last week I commented on one of them; this time I’ll take a look at another.

I mentioned that “An attacker who manages to crack or bypass the file protections will be able to obtain the cleartext password, which is a problem”. True enough, as far as the statement goes.

Imagine for a moment that this attacker has indeed managed to bypass the operating system’s file system permissions, so he can read the supposedly protected file and extract the password. The requested solution is to avoid storing the password (or, somehow, store it encrypted) so that when the attacker breaks the file permissions he will not get the [cleartext] password. Attack foiled!

Or was it?

If faced with this situation, what would the attacker do next?

We know the web server still ultimately needs the passwords, so we know the data will exist within the process at some point even if it is not on disk (or is on disk only in encrypted form). Since the attacker bypassed the file system protections, how about modifying libns-httpd40.so (the core implementation of the server) to insert malicious code which emails him the password once the server process obtains it? That’s just one example; there would be nearly endless injection points where the attacker could insert similar trickery if we assert that the file system permissions have been bypassed.

Fortunately the file system permissions aren’t quite so weak. While it is true that an attacker who breaks them could indeed then read passwords out of a file, it is also true that if the attacker has this power then the entire system is compromised beyond help.


Tea and No Tea

If you had the opportunity to play that classic game you probably eventually succeeded in having tea and no tea at the same time. Of course, those events took place in a universe which had the benefit of the Infinite Improbability Drive.

Some time ago I wrote a humorous take on encrypting passwords. Hopefully it was clear that I was poking fun at a few nonsensical implementations. However, every so often I get requests to implement something along the lines of what I described in that article.

There are two types of passwords handled by the web server. First, there are passwords which (one way or the other) will be sent by a client to the server, and the server needs to validate (by various mechanisms) whether they match the stored password. Second, there are passwords which the web server process will itself need to know during its lifetime, because it will interact with some other entity using a protocol which requires the web server to possess the password in the clear.

The first kind is used, for example, when authenticating web clients using HTTP Basic or Digest auth or Servlet FORM authentication. The server doesn’t need to know the actual password of that user. It only needs to know a one-way hash of the password in a suitable form. The suitable form can vary depending on the protocol but we can ignore the details for the moment – the high-level point being that the server only needs a hash of the password; therefore it doesn’t need to store the clear text password, nor does it need to be able to compute the clear text password (say, by decrypting) at any time.

The second kind is different. Above I defined these to be those which the server will need to know in the clear at some point. So the question is raised, how should these be stored? Storing one-way hashes of the password is clearly out, since the one-way-ness handily breaks the stated requirement of recovering the original password.

We can encrypt these passwords with a suitably strong reversible algorithm! In the previous article I wrote “encryption is really hard to break, so that will certainly improve the security even further”. I chose that wording to highlight the common misconception that encryption is a magic bullet that makes problems go away.

Unfortunately, we also need to handle the key management issues if encryption is introduced. It is certainly possible to encrypt those passwords with a strong cipher such as AES and have the web server store only the encrypted data on disk. But we have already asserted that the web server process needs to obtain the clear text of those passwords at some point during the life of the process. How will the process do that? As long as it has the encryption key it can decrypt the data to obtain the passwords.

So where is that key coming from?

There are two possibilities:

  1. The key is not stored anywhere on disk; a human must enter it into the console when the server process requests it.
  2. The key is kept on disk in a form which allows the server process to obtain it programmatically, without human interaction.

The first choice has some real benefits. The key is never stored anywhere and the passwords are only stored on disk securely encrypted. Of course, there is a major drawback to this option – the server cannot start without the help of the human who needs to enter the key. The scenarios where this is practical are limited.

The second choice is the one I covered in the previous article.
