Using Web Server 7 with Microsoft Active Directory

Among the many new security-related features in Web Server 7 are a few new configuration elements for the LDAP auth-db (authentication database).

Here is a summary:

search-filter [optional] The search filter to use to find the user. The default is uid.
group-search-filter [optional] The search filter to find group memberships for the user. The default is uniquemember.
group-target-attr [optional] The LDAP attribute name that contains group name entries. The default is CN.

One use case for these configurable search options is to interoperate with Microsoft Active Directory (MSAD). The problem with MSAD is that user ids are not kept (by default) in the usual uid attribute. For this reason, when the LDAP auth-db attempts to search a MSAD directory to find a user, it will never be able to obtain a match since it is attempting to match on the uid attribute.

In 7.0 we can now set the search-filter property to override the usual default. In MSAD the user id is kept in an attribute called samAccountName. Here is a sample LDAP auth-db configuration for MSAD (showing a minimal configuration; other options can of course be specified as usual):

<auth-db>
	<name>ldapMSAD</name>
	<url>ldap://crashbox.sfbay/dc=sfbay,dc=sun,dc=com</url>
	<property>
		<name>search-filter</name>
		<value>samAccountName</value>
	</property>
</auth-db>
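
If you also need the server to resolve group memberships against MSAD (for group-based ACLs), the group-related properties can be overridden in the same way. The following is only a sketch, assuming the common MSAD setup where group membership is recorded in the group entry's member attribute and the group name in cn; adjust the values to your directory's schema:

<auth-db>
	<name>ldapMSAD</name>
	<url>ldap://crashbox.sfbay/dc=sfbay,dc=sun,dc=com</url>
	<property>
		<name>search-filter</name>
		<value>samAccountName</value>
	</property>
	<property>
		<name>group-search-filter</name>
		<value>member</value>
	</property>
	<property>
		<name>group-target-attr</name>
		<value>cn</value>
	</property>
</auth-db>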

P.S. Of course, I should probably point out that a better solution is to simply upgrade to Sun’s own Directory Server instead!

Posted in Sun

Web Server 7 ECC Performance Notes

As I have mentioned earlier, the upcoming Web Server 7 will include ECC support.

While relative performance predictions comparing RSA and ECC are available in various papers, I was curious to get a glimpse of how they perform in practice in our web server. So, I did a few runs and graphed the results below.

The X axis corresponds to the percentage of new TLS session handshakes during the run (several thousand requests). If a single client were issuing all the requests it would perform one handshake during the initial connection and reuse that TLS session for all remaining requests. As one could expect, in this case there isn’t really any difference between the algorithms and keysizes since no matter how fast or slow the very first connection was, it is a minute portion of the total runtime. At the other extreme is the case of 100% new handshakes – every request comes from a new client and that client doesn’t reuse the session again.

Neither of these extremes is realistic for web server traffic, of course. Normal usage patterns will fall somewhere in between.

The following table shows approximate equivalency in strength between RSA and ECC, to provide some context to the results above:

RSA key size (bits)    ECC key size (bits)
 1024                   160
 2048                   224
 3072                   256
 7680                   384
15360                   521

So, we can see that while 1024 bit RSA isn’t too much of a performance burden even if our server experiences lots of new handshakes, things look quite different at 2048 bit RSA. And 4096 bit RSA is nearly off the chart. On the other hand, while ECC with the nistp256 curve is roughly equivalent in strength to 3072 bit RSA, it performed faster than 2048 bit RSA. Not bad.

The higher key lengths won't be so interesting for years to come (barring unforeseen advances) but it is interesting to note how the performance compares as keysizes grow. ECC with nistp521 is substantially faster than 4096 bit RSA, even though it is roughly equivalent to 15360 bit RSA in strength!

Note: I ran the web server on a single-CPU, single-core server for these tests. Such machines are hard to come by these days... so I ran it on a very old box I had sitting around. The absolute numbers aren't very interesting so I left the Y axis numbering out.

Posted in Sun

Secure Password Storage

If you are using SSL with your Web Server 6.1, your server has one or more private keys. These keys are kept, encrypted, in the NSS database. In order to read its own keys the server must decrypt this store, and for that it needs the NSS database password. So when the server starts with SSL enabled, it prompts for this password and uses it to unlock the NSS database.

Often, this is inconvenient. After all, servers need to start unattended. In that case, the only solution is to store the required password somewhere so the server can automatically get it during startup.

This is handled in 6.1 by storing the password in a file called password.conf in the config directory. This file is owned by and readable only by the web server user, so other unprivileged users on the system cannot get at it.
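
For reference, password.conf simply holds token:password pairs, one per line (at least as far as I recall; check the 6.1 documentation for your setup). For the default internal key database it would look something like this:

internal:password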

However, the operating system filesystem permissions are sometimes seen as being too weak. An attacker who manages to crack or bypass the file protections will be able to obtain the cleartext password, which is a problem.

Let’s see how we can improve this situation.

I’ll make a small modification to the start script so it obtains the password by invoking an executable (I’ll call it wsgetpwd) instead of prompting interactively:

99c99,100
<               ./$PRODUCT_BIN -r $SERVER_ROOT -d $INSTANCE_CONFIG_DIR -n $INSTANCE_NAME $@
---
>               PWD=`wsgetpwd $INSTANCE_CONFIG_DIR`
>               echo $PWD | ./$PRODUCT_BIN -r $SERVER_ROOT -d $INSTANCE_CONFIG_DIR -n $INSTANCE_NAME $@

Important Note: The content of the start script is not a public interface. This means any changes to it are unsupported and it also means you cannot expect any such changes to continue working after a service pack or version upgrade. You’ve been warned. No production servers were harmed in the writing of this article.

Ok, with this tiny bit of infrastructure in place we can experiment with various implementations of wsgetpwd until we find something superior to keeping the cleartext password in password.conf.

Let's start simple to see if it works. I created a file called password in the instance config directory to contain the password and implemented wsgetpwd to simply print it out:

% rm -f config/password.conf
% echo password > config/password
% chmod 600 config/password
% cat ../bin/https/bin/wsgetpwd
cat $1/password

Starting the server shows that it works fine. So far so good. We haven’t accomplished much yet though – if our attacker can bypass the filesystem permissions on password.conf, they might also bypass the permissions on the new password file.

So, let’s improve wsgetpwd. This time I’ll obfuscate the password using base64 (btoa/atob are small utils from NSS which do this encoding) so it is no longer human-readable.

% echo password | btoa > config/password
% cat config/password
cGFzc3dvcmQK
% cat ../bin/https/bin/wsgetpwd
cat $1/password | atob

Now, even if an attacker manages to read the password file, they’ll only get “cGFzc3dvcmQK” which doesn’t really do them much good (unless they know about atob or base64, but how likely is that?)

Nonetheless, it’s been suggested that the password will be safer if it is encrypted with a proper encryption algorithm. Encryption is really hard to break, so that will certainly improve the security even further. I’ll use encrypt(1).

Some prep work is needed first. I'll use /dev/random to obtain bits for an AES key, encrypt the password with that key into the password file, and finally reimplement wsgetpwd to decrypt the password at runtime.

% dd if=/dev/random of=config/encrypt.key bs=1 count=16
% chmod 600 config/encrypt.key
% echo password | encrypt -a aes -k config/encrypt.key > config/password
% cat config/password | od -x
0000000 0000 0100 0000 e803 058c 6f08 5b48 c607
0000020 3517 7fc5 65ce c64e 95ff 576d ee95 4cb4
0000040 e990 5834 df32 9042 df90 9937 47f4 a464
0000060 2720 f8db ad0a d089
0000070
% cat ../bin/https/bin/wsgetpwd
decrypt -a aes -k $1/encrypt.key -i $1/password

Start the server and... it works! The password is now encrypted with AES on disk and the server is still able to start automatically. When our attackers crack the filesystem permissions on the password file, all they will get is the encrypted bits (shown by the od -x output above); the cleartext password remains secure.

If you’ve read this far, one final thought: Should I file this article under “Web Server Security” or under “Light Comedy”?

Posted in Sun

Self-signed SSL Certificates in Web Server 6.1

When working with SSL-enabled web servers it is often useful to create self-signed certificates for testing and development. This is much quicker and more convenient than going through an external CA when all you need to do is run some tests on your development machine. Unfortunately Web Server 6.1 (formally Sun Java System Web Server 6.1, which you may also have met under the SunONE or the older iPlanet brand names) does not support creating self-signed certificates through the admin UI. On the bright side, it is actually quite easy to create these certificates using the NSS tool certutil.

First: Create the NSS databases

You can do this through certutil, but let's do it through the supported admin UI: "Servers" -> "Manage Servers" -> select the server and click "Manage". Then click the "Security" tab. Enter the password twice into the fields and submit. A popup says "Success!"; click OK.
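
For reference, the unsupported command-line equivalent is certutil's -N option, run from the alias directory described in the next step (it will prompt you for the new database password):

$BASE/alias% certutil -N -d . -P "https-boqueron.virkki.com-boqueron-"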

Second: Create a local CA

Despite the title of this entry, instead of directly creating a self-signed cert, I’ll first create a local CA for myself and then use it to sign the server cert, so I can demonstrate both possibilities. If you prefer, skip the hierarchy and generate a self-signed server cert directly.
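
For the record, the direct variant is a single certutil -S invocation with -x (self-sign) and no -c issuer. It would look something like the following sketch, run from the alias directory introduced in the next step, with your own prefix and subject substituted:

$BASE/alias% certutil -S  -P "https-boqueron.virkki.com-boqueron-"
   -d . -n MyServerCert -s "CN=boqueron.virkki.com,C=US" -x -t "u,u,u"
   -m 103 -v 99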

Go to the alias directory under the install root. That is where the NSS database files live in 6.1. At this point you should have at least the files shown below in the alias directory.

Note that boqueron.virkki.com here is the host and domain name, your files will have names corresponding to your installation. For all subsequent commands in this example, substitute the corresponding names for your instance in place of these.

$BASE/alias% ls -1
https-boqueron.virkki.com-boqueron-cert8.db
https-boqueron.virkki.com-boqueron-key3.db
secmod.db

You can use certutil -L to list all the certs in the database. If you just created the database through the UI like I did, it’ll be empty:

$BASE/alias% certutil -L -d . -P "https-boqueron.virkki.com-boqueron-"
certutil -L -d . -P "https-boqueron.virkki.com-boqueron-"

Now, I will create a sample CA with certutil (note this is a single command line, which I've split here only for readability). I have omitted some of the output for brevity:

$BASE/alias% certutil -S  -P "https-boqueron.virkki.com-boqueron-"
   -d . -n SelfCA -s "CN=Self CA,OU=virkki.com,C=US" -x -t "CT,CT,CT"
   -m 101 -v 99 -5

Generating key.  This may take a few moments...

                          0 - SSL Client
                          1 - SSL Server
                          2 - S/MIME
                          3 - Object Signing
                          4 - Reserved for futuer use
                          5 - SSL CA
                          6 - S/MIME CA
                          7 - Object Signing CA
                          Other to finish

Enter 5 since we want a CA.

                          0 - SSL Client
                          1 - SSL Server
                          2 - S/MIME
                          3 - Object Signing
                          4 - Reserved for futuer use
                          5 - SSL CA
                          6 - S/MIME CA
                          7 - Object Signing CA
                          Other to finish

Enter 9 to end.

Is this a critical extension [y/n]?

Enter y.

Third: Use this local CA to sign your server cert

$BASE/alias% certutil -S  -P "https-boqueron.virkki.com-boqueron-"
   -d . -n MyServerCert -s "CN=boqueron.virkki.com,C=US" -c SelfCA -t "u,u,u"
   -m 102 -v 99 -5

Generating key.  This may take a few moments...

                          0 - SSL Client
                          1 - SSL Server
                          2 - S/MIME
                          3 - Object Signing
                          4 - Reserved for futuer use
                          5 - SSL CA
                          6 - S/MIME CA
                          7 - Object Signing CA
                          Other to finish

Enter 1 since we want an SSL server cert.

                          0 - SSL Client
                          1 - SSL Server
                          2 - S/MIME
                          3 - Object Signing
                          4 - Reserved for futuer use
                          5 - SSL CA
                          6 - S/MIME CA
                          7 - Object Signing CA
                          Other to finish

Enter 9 to end.

Is this a critical extension [y/n]?

Enter y.

Try certutil -L again; this time you'll see both your CA and your server cert:

$BASE/alias% certutil -L -d . -P "https-boqueron.virkki.com-boqueron-"
certutil -L -d . -P "https-boqueron.virkki.com-boqueron-"
MyServerCert                                                 u,u,u
SelfCA                                                       CTu,Cu,Cu

Also try looking at them from the admin UI. Under "Security" -> "Manage Certificates", you will see these newly created certificates listed.

That’s it! You can now assign the MyServerCert to any of your SSL-enabled listeners.

For example, if you want to change one of your non-SSL listeners to enable SSL and use the new certificate, you can follow this sequence in the admin UI: “Preferences” -> “Edit Listen Socket” -> click on an ls ID to edit. Then check the security box and click OK. Once again, “Edit Listen Socket” -> click on the same ls. This time you see SSL options, change any if desired and then click OK. Finally Apply to apply the changes.

I used quite a few options to certutil which I didn’t describe in any detail to keep this brief. Run certutil -H to read the description of every option I used (and the ones I didn’t) so you can tailor the options to your needs.

Posted in Sun

Using MySQL with Web Server 7

A while ago I worked on some Java servlet code which needed to talk to a MySQL backend database. Diverging a bit from my usual topics, I thought I'd write this down in case it is useful for others (or at least it'll be useful for me the next time I need to do the same).

This configuration assumes Web Server 7. In server.xml I configured a JDBC resource for mysql:

  <jdbc-resource>
    <jndi-name>jdbc/mysql</jndi-name>
    <datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource</datasource-class>
    <min-connections>1</min-connections>
    <max-connections>5</max-connections>
    <property>
      <name>password</name>
      <value>password-here</value>
    </property>
    <property>
      <name>user</name>
      <value>jyri</value>
    </property>
    <property>
      <name>url</name>
      <value>jdbc:mysql://boqueron/dbname</value>
    </property>
  </jdbc-resource>

Then I copied mysql-connector-java-3.1.12-bin.jar into $INSTANCE/lib (where $INSTANCE is the top-level directory of the specific instance; note that you may need to create the lib directory there if it hasn't been needed before).
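
In shell terms that's just something like the following (the connector version and $INSTANCE path are whatever matches your setup):

% mkdir -p $INSTANCE/lib
% cp mysql-connector-java-3.1.12-bin.jar $INSTANCE/lib/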

In the web application itself, in web.xml, I configured:

 <resource-ref>
    <description>JDBC Connection Pool</description>
    <res-ref-name>jdbc/mysql</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
  </resource-ref>

And in sun-web.xml:

  <resource-ref>
    <res-ref-name>jdbc/mysql</res-ref-name>
    <jndi-name>jdbc/mysql</jndi-name>
  </resource-ref>

That’s it for the configuration. In my servlet code:

    Context initContext = new InitialContext();
    Context webContext = (Context)initContext.lookup("java:/comp/env");

    DataSource ds = (DataSource) webContext.lookup("jdbc/mysql");
    Connection conn = ds.getConnection();

    Statement stmt = conn.createStatement();
     ....
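
From there on it's plain JDBC (the snippet assumes the usual java.sql, javax.sql and javax.naming imports). As a rough sketch of how the elided part might continue, with the table and column names made up purely for illustration:

    ResultSet rs = stmt.executeQuery("SELECT id, name FROM sometable");
    while (rs.next()) {
        // ... use rs.getInt("id") and rs.getString("name") ...
    }
    rs.close();
    stmt.close();
    conn.close();   // returns the connection to the pool
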
Posted in Sun

More on Observing SSL Requests

Earlier I have covered several different methods of approaching the problem of diagnosing HTTP (or even other protocol) traffic obscured within SSL/TLS.

There's another approach that I haven't mentioned yet which I use every now and then, whenever I want to look at what exactly the browser is sending. Check out Tamper Data, a Firefox extension which can alter the requests from the browser. While its main purpose is presumably to alter the data (which, as an aside, can be very useful for testing server and server application behavior in many ways), it can just as well be used merely for observing said data.

Of all the methods I've discussed, this one is the most convenient as there is really nothing to do beyond installing the extension. Whenever you wish to observe the interaction, just open the Tamper Data window (from the Tools menu) and watch away. The limitation, of course, is that it's only good for watching interaction originating from your single browser session. But often that is all you need.

Posted in Sun

Dynamic CRL Updates in Web Server 7.0

Every now and then I get asked how to load one or more CRLs (Certificate Revocation Lists) into the web server without going through the admin GUI (in 6.1, only the GUI has direct support for updating CRLs). Clearly this is a useful goal, since going through the GUI requires manual intervention, hardly practical for most web servers.

It is possible to do this in 6.1 via crlutil. However, there’s a potentially big catch – it is necessary to restart the server instance(s) to pick up the new CRL(s). Depending on how often the CRLs are refreshed, how many CA (Certificate Authority) CRLs need to be refreshed and the site’s uptime requirements, this can be a problem.

One of the many nice features of 7.0 (preview here) is that it allows CRLs to be loaded dynamically at runtime.

Every server instance can define a directory from which CRLs will be loaded, as follows, in server.xml (if you have multiple instances you'll most likely want to share this location amongst them):

  <pkcs11>
     <crl-path>/export/crls</crl-path>
  </pkcs11>

7.0 also introduces a mechanism for running recurring events. This can be used (among other things – but a full description is a topic for another time) to refresh the CRLs. The following will check every minute (60 seconds) to see if any new CRLs are available:

  <event>
    <interval>60</interval>
    <update-crl/>
  </event>

And that's all there is to it. You can copy new CRLs as they become available into your crl-path and the server will pick them up automatically. I recommend setting up a simple script which obtains the new CRLs from your CA periodically and copies them into crl-path; you can call this from cron at the desired intervals, as sketched below. Also look into the <time> element within <event>; it can be a better fit if you want refreshes at fixed times.
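
Such a script might be no more than the following (the URL and file name here are placeholders; use your CA's actual CRL distribution point):

#!/bin/sh
# Fetch the CA's latest CRL and drop it into the server's crl-path.
CRLDIR=/export/crls
TMP=/tmp/myca.crl.$$

if wget -q -O $TMP http://crl.example.com/myca.crl; then
        cp $TMP $CRLDIR/myca.crl
fi
rm -f $TMP

A crontab entry along the lines of "0 * * * * /path/to/fetch-crl.sh" would then refresh the CRL hourly.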

As a side note, while experimenting with WS 7.0 configuration it is handy to edit server.xml manually. However, if you end up wanting to automate configuring your instances, always go through wadm instead. You definitely don’t want to do something fragile like writing perl scripts to directly manipulate server.xml – such code is likely to break when you upgrade to a future version of the server. Instead, write scripts around wadm. That way wadm will shield your code from any internal configuration changes. In that spirit, take a look at the following wadm commands: list-crl, install-crl, delete-crl, get-crl-prop, list-events, create-event, delete-event, get-event-prop, enable-event, disable-event.

Posted in Sun

A Peek Into Web Server 7.0 Files

An exciting week in the land of web servers: if you've visited our JavaOne booth or been following the blog, you already know that a preview of Sun's JES Web Server 7.0 is now available for you to download.

7.0 is a new Major release and there is quite a bit that is new and improved. If you are used to operating through the admin GUI, the major change you will notice very quickly is the all-new admin interface.

On the other hand, if you’re the type to go looking at the actual installation, the major change you’ll notice right away is that the file layout is quite a bit different from previous releases.

Let me digress for a moment… One of my jobs here at Sun is as a member of Sun's ARC (Architecture Review Committee). John Plocher has written at some length about Sun's ARC process so I won't repeat too much of that. One of the many goals of the ARC process is to make sure there is clarity as to which interfaces customers can rely on.

As long as you rely only on public Stable interfaces you will be assured that your code will continue to work in future releases (this is why you can take binaries you might’ve compiled on Solaris over 10 years ago and they still run on Solaris 10!). If, however, you build a dependency on any undocumented (private) interface, you must assume your code will break at any time without warning (usually at the worst possible time). Clearly, not a good idea.

(Please note that the word interface is used in a very broad sense. Anything you might depend on is an interface. APIs are interfaces, but so are CLI names and options, file locations, some output and so forth. Anything you might be able to build a dependency on can be an interface. Note also that Stable interfaces may in some cases also change – but only in Major releases with advance notice, so you’d have plenty of time to adjust.)

Why do I bring this up? Well, I’ll give a quick view of the file layout changes in 7.0 and I’ll also say a bit about where things fall with respect to their interface classification.

The web server file layout up to 6.1 harks back to the old iPlanet days and has grown a bit messy over the years. In 7.0 things have been reorganized and cleaned up. The new layout also makes it a bit easier for you to identify what you can rely on and what is off-limits.

If you look at the top-level installation directory here are a few important areas to note:

Directory      Comments

bin            This contains public binaries you can rely on. You can build
               shell or other scripts using the commands you find here.

include        This is a public directory containing include files you may
               need when compiling code for use with the server.

plugins        This is also a public directory, containing plugins you may
               interact with directly.

samples        Here are code samples that demonstrate use of various features.
               While this is (of course!) public for you to use, note that
               samples are Unstable (which makes sense; the purpose of samples
               is to showcase features, not to be relied on as-is).

admin-server   Unlike 6.1, where the admin server instance was named with the
               same pattern as regular instances, in 7.0 the admin server is
               always under this directory. Within admin-server, you can
               access the scripts in bin and the files in logs. Other
               subdirectories contain private implementation detail only.

https-*        These are the server instances. As with admin-server, the bin
               and logs subdirectories are public. Unlike 6.1, the docs
               directory (default docroot) is unique to each instance and can
               be found here. Another major cleanup from 6.1 is that all the
               instance configuration lives in one place, the config
               directory; no more hunting for instance configuration among
               many top-level directories!

lib            This is a private directory. Its contents are not available
               for direct consumption; it only contains private
               implementation detail.

An important note about configuration files: while you may edit these manually with emacs (or lesser editors ;-), do not establish programmatic dependencies (e.g. sed scripts) on them. For scripting changes, go through wadm, the all-new CLI included in 7.0. By doing this you will isolate your scripts from any future config file format changes.

That’s the quick tour of the 7.0 file layout (I didn’t cover every detail here, but enough to get you started). Download the bits and play with them to see what else is new.

I should point out that while the interface details I've given here are unlikely to change significantly from now until the product is released, they may of course change. The current preview build is just a preview, after all, still subject to change. For the same reason, I didn't write down precise interface levels, just a general description of each area. The formal product documentation which will be available when the production release comes out will contain the authoritative interface information.

Posted in Sun

Using ssldump

Earlier I’ve talked about various ways of observing and debugging SSL connections to the web server, such as acting as a client and acting as a proxy.

Both of these methods have their advantages and disadvantages. Today I'll review using ssldump, which combines some of the advantages of both of the previous tools: you can observe real requests from a real client and you can also observe the actual application data (HTTP layer). The only drawback is that there is a bit of additional setup work to do before this will work, but it is definitely worth it.

Unlike ssltap, ssldump observes network traffic directly. So it is not necessary to proxy the client requests through it, just point it at a network interface where the traffic of interest is visible. For the easiest use case you can simply run:

# ssldump -i bge0 -d port 8088
New TCP connection #1: myclient(34286) <-> myserver(8088)
1 1  0.0265 (0.0265)  C>S  Handshake
      ClientHello
        Version 3.0
        resume [32]=
          08 22 e7 bc cf 13 e7 7f 80 0d 62 43 24 4c 65 5b
          1e 19 69 ab 3c 51 0e 95 29 d9 79 9d 9f 79 04 92
        cipher suites
        Unknown value 0x39
        Unknown value 0x38
        Unknown value 0x35
        Unknown value 0x33
        Unknown value 0x32
        SSL_RSA_WITH_RC4_128_MD5
        SSL_RSA_WITH_RC4_128_SHA
        Unknown value 0x2f
        SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA
        SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA
        Unknown value 0xfeff
        SSL_RSA_WITH_3DES_EDE_CBC_SHA
        SSL_DHE_DSS_WITH_DES_CBC_SHA
        Unknown value 0xfefe
        compression methods
                  NULL
1 2  0.0273 (0.0007)  S>C  Handshake
      ServerHello
        Version 3.0
        session_id[32]=
          08 22 e7 bc cf 13 e7 7f 80 0d 62 43 24 4c 65 5b
          1e 19 69 ab 3c 51 0e 95 29 d9 79 9d 9f 79 04 92
        cipherSuite         SSL_RSA_WITH_RC4_128_MD5
        compressionMethod                   NULL
1 3  0.0273 (0.0000)  S>C  ChangeCipherSpec
1 4  0.0273 (0.0000)  S>C  Handshake
1 5  0.0635 (0.0362)  C>S  ChangeCipherSpec
1 6  0.0635 (0.0000)  C>S  Handshake
1 7  0.0635 (0.0000)  C>S  application_data
1 8  0.0643 (0.0008)  S>C  application_data
1 9  30.1176 (30.0532)  S>C  Alert

Note that I ran ssldump as root since it needs to listen to the traffic on the network device bge0 (substitute the correct device for your system here; check with ifconfig if in doubt). My SSL-enabled web server listener is on port 8088 for this example, so I limited ssldump to tracking traffic on that port (I could've said "dst port 8088" as well); otherwise it would've dumped all network traffic on that device, making the output difficult to follow.

Also check the first column in the output, which above is always “1”. Since ssldump observes the network traffic there may be multiple requests going to the server at once. Above there was only one:

New TCP connection #1: myclient(34286) <-> myserver(8088)

In the case where there are multiple connections observed, each will be numbered. The packets for each connection will most likely be interleaved in the output, but you can track each one by correlating these numbers. That can be quite useful – I’ve used ssldump to diagnose some SSL connectivity problems which were only reproducible when multiple concurrent requests hit the server all at once.

The example above is nice in that I didn't have to proxy my browser traffic through it; I could just connect to my server as usual. But aside from that it is not so interesting. The output isn't really any more useful than what you get from ssltap.

So, let’s make things more interesting.

If given access to the server private key, ssldump can decrypt the traffic to and from that server on the fly. That’s where it gets really useful. We’ll need to do a bit of prep work to set this up.

First, extract the private key from the server instance into a PKCS#12 format file using pk12util.

  • You’ll need to know the nickname of the server keypair/cert (see your server.xml) for the -n parameter.
  • I changed to the directory where the NSS *.db files live, so I pass "-d .". Alternatively you could run the command from elsewhere by giving the right path.
  • Finally, I am running JES Web Server 7.0 so there is no prefix to the NSS files, but if you are on 6.1 you'll need to give a -P parameter with the right prefix for that instance.
% pk12util -o myserver.pk12 -n Server-Cert -d . -v
Enter Password or Pin for "NSS Certificate DB":
Enter password for PKCS12 file:
Re-enter password:
pk12util: PKCS12 EXPORT SUCCESSFUL

Perhaps needless to say, but if this was a production server I’d have to be very careful where I store this private key file. Keep that in mind.
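
At a minimum I'd keep the permissions on it tight and delete the exported copies (this file and the PEM version created below) as soon as the debugging session is over:

% chmod 600 myserver.pk12
% rm myserver.pk12 /tmp/myserverkey     # once finished debugging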

Next I’ll just convert this to a format suitable for ssldump using openssl:

% openssl pkcs12 -in myserver.pk12 -out /tmp/myserverkey
Enter Import Password:
MAC verified OK
Enter PEM pass phrase:
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:

Ok... now let's run ssldump and give it access to the server key material (the "PEM pass phrase" you need to type below is the same one you gave above):

# ssldump -i bge0 -d -k /tmp/myserverkey port 8088
Enter PEM pass phrase:

Then I connect to my server from a browser:

New TCP connection #1: laptop(39699) <-> myserver(8088)
1 1  0.0853 (0.0853)  C>S SSLv2 compatible client hello
  Version 3.1
  cipher suites
  Unknown value 0x39
  Unknown value 0x38
  Unknown value 0x35
  Unknown value 0x33
  Unknown value 0x32
  TLS_RSA_WITH_RC4_128_MD5
  TLS_RSA_WITH_RC4_128_SHA
  Unknown value 0x2f
  TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
  TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
  Unknown value 0xfeff
  TLS_RSA_WITH_3DES_EDE_CBC_SHA
  TLS_DHE_RSA_WITH_DES_CBC_SHA
  TLS_DHE_DSS_WITH_DES_CBC_SHA
  Unknown value 0xfefe
  TLS_RSA_WITH_DES_CBC_SHA
  TLS_RSA_EXPORT1024_WITH_RC4_56_SHA
  TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA
  TLS_RSA_EXPORT_WITH_RC4_40_MD5
  TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
1 2  0.0856 (0.0002)  S>C  Handshake
      ServerHello
        Version 3.1
        session_id[32]=
          08 22 c2 25 34 4e 85 61 dd 24 ba 9a 59 a2 dc b0
          77 a0 3f b7 ac c9 d3 ce 76 4a b5 42 cc 44 30 fb
        cipherSuite         TLS_RSA_WITH_RC4_128_MD5
        compressionMethod                   NULL
      Certificate
      ServerHelloDone
1 3  6.1870 (6.1013)  C>S  Handshake
      ClientKeyExchange
1 4  6.1870 (0.0000)  C>S  ChangeCipherSpec
1 5  6.1870 (0.0000)  C>S  Handshake
      Finished
1 6  6.1931 (0.0061)  S>C  ChangeCipherSpec
1 7  6.1931 (0.0000)  S>C  Handshake
      Finished
1 8  6.2852 (0.0921)  C>S  application_data
    ---------------------------------------------------------------
    GET / HTTP/1.1
    Host: myserver:8088
    User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.7.5)
 Gecko/20041217
    Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/
plain;q=0.8,image/png,*/*;q=0.5
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive

    ---------------------------------------------------------------
1 9  6.2859 (0.0007)  S>C  application_data
    ---------------------------------------------------------------
    HTTP/1.1 403 Forbidden
    Server: Sun-Java-System-Web-Server/7.0
    Date: Thu, 27 Apr 2006 08:12:15 GMT
    Content-length: 142
    Content-type: text/html

    Forbidden

    Forbidden

    Your client is not allowed to access the requested object.
    ---------------------------------------------------------------
1 10 12.0328 (5.7468)  C>S  Alert
      level           warning
      value           close_notify
1    12.0360 (0.0032)  C>S  TCP FIN
1    12.0361 (0.0000)  S>C  TCP FIN

Nice! Now I can watch all the application data traffic in plaintext. As you can imagine, this can be very useful for diagnosing all kinds of problems with an SSL-enabled server.

As a final note, when building ssldump be sure to build it with OpenSSL support or the traffic decryption will not work. Here are the configure options I gave it on my Solaris 10 machine:

./configure --with-pcap=/opt/sfw --with-openssl=/usr/sfw
Posted in Sun