Wednesday, 10 October 2007

OpenID IdP for Cosign

I've been following OpenID's progress for a while - whilst there still aren't any "killer" applications making use of it, it is a very promising example of federated identity for the 'real' world. One potential of running an internal authentication system is to use it to bootstrap an OpenID provider, so that whilst you're logged in to your organisation's system you can make use of an OpenID without any additional authentication steps.

I spent a bit of time yesterday looking at the OpenID servers which are currently available. There isn't a huge amount of freely available server code - the easiest to modify appeared to be JanRain's PHP server, which is built on top of their general purpose PHP OpenID library. This server supports OpenID authentication, along with XRDS (an XML format used for service discovery by OpenID-enabled applications). However, it's designed to use an internal password database.

I've produced some patches to add a number of new features, allowing fallback against an enterprise authentication scheme.

  • When the ENTERPRISE_AUTHENTICATION define is set, and the web server provides a REMOTE_USER variable for a user who exists in the local database, authenticate that user.
  • When the ENTERPRISE_AUTHENTICATION define is set and REMOTE_USER is not set, remove any cached authentication information.
  • When the AUTOMATIC_REGISTRATION define is set and the REMOTE_USER doesn't exist in the local database, add them.
  • When the login page is called but the user has already logged in, just pass them on to the next stored action.
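The combined effect of these defines can be sketched as follows - a minimal Python rendering of the logic (the server itself is PHP; the function and variable names here are illustrative, not the patch's own):

```python
# Illustrative sketch of the enterprise-fallback logic; the real
# patches are PHP, and these names are invented for the example.
ENTERPRISE_AUTHENTICATION = True
AUTOMATIC_REGISTRATION = True

def resolve_user(environ, user_db, session):
    """Work out who, if anyone, is authenticated for this request."""
    remote_user = environ.get('REMOTE_USER')
    if ENTERPRISE_AUTHENTICATION:
        if remote_user is None:
            session.pop('user', None)      # drop cached authentication
            return None
        if remote_user not in user_db and AUTOMATIC_REGISTRATION:
            user_db[remote_user] = {}      # register unknown users
        if remote_user in user_db:
            session['user'] = remote_user  # authenticate the user
            return remote_user
        return None
    return session.get('user')             # fall back to the local login

# A pre-authenticated request registers and logs in the user ...
db, session = {}, {}
print(resolve_user({'REMOTE_USER': 'alice'}, db, session))  # → alice
# ... and a later request with no REMOTE_USER clears the session.
print(resolve_user({}, db, session))  # → None
```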

The patch is available from

The problem with this server is that it is all implemented as a single script. It isn't immediately apparent from the script which actions are expected to require authentication and which are not, so the script's existing workflow is preserved. Cosign (our web authentication solution of choice) is configured so that it will provide REMOTE_USER information to the script where that is available, but won't prompt the user where it is not. This means that those portions of the script which should work for unauthenticated users continue to do so, whilst those which require authentication redirect to the script's
?action=login handler. Apache is then configured, using mod_rewrite, so that requests for ?action=login are redirected to a Cosign-protected location which always requires authentication. This triggers the usual Cosign authentication process, which eventually redirects back to the script itself. The change to the login page to accept pre-authenticated users then kicks in, and the script continues processing as usual.

The Apache configuration magic that accomplishes all of this is as follows:

Alias /iVouch/ /var/www/openid/src/
Alias /iVouch-login/ /var/www/openid/src/
php_value session.save_path /var/openid-session/

<Location /iVouch-login>
CosignProtected On
CosignGetKerberosTickets On
</Location>

<Location /iVouch>
CosignProtected On
CosignAllowPublicAccess On
</Location>

RewriteEngine On
RewriteCond %{QUERY_STRING} ^$
RewriteCond %{LA-U:REMOTE_USER} ^$
RewriteRule ^/iVouch/$ /iVouch-login/ [PT]

RewriteCond %{QUERY_STRING} action=login
RewriteRule ^/iVouch/$ /iVouch-login/ [PT]

RewriteCond %{QUERY_STRING} action=logout
RewriteRule ^/iVouch/$ [R]

This all assumes that the OpenID server is sitting under /iVouch/ on the web server - we'll probably move it to the top level if it ever goes into production. The first set of rewrite rules means that if you go to the front page of the script you will be logged in. The second set forces a login when the script's login action is performed. The third rule calls the central Cosign logout function when the script's logout action is reached.

Tuesday, 9 October 2007

kx509, kerberos and cosign

One of the things I've been doing over the summer is working on implementing some additions for our web authentication system. I thought I'd take a few moments to discuss these changes, and to describe the way that we're using them.

Historically, we used client certificates for web authentication. Generally speaking, these client certificates were obtained using the University of Michigan's kx509 system, run transparently from the PAM stack at user login. When it worked, our users were unaware of a separate authentication step to use web applications, and all was well. However, we were (and are) seeing increasing demand to provide web applications to clients that aren't under our control. These clients don't have the kx509 utilities installed, and don't have PAM (let alone having fancy things integrated into the stack). We implemented a solution which would download client certificates into the browser, but pretty soon ran up against the fact that most browsers have incredibly poor user interfaces for dealing with certificate selection and expiry. Implementing a replacement has been on the cards for years, but we'd limped along (using a locally developed kx509 implementation that worked with the Mac OS X keyring, to allow Safari to download credentials, and the new kx509 plugin for NIM developed by Secure Endpoints).

We decided upon Cosign (again from the University of Michigan) as a replacement web authentication system, and others set about building a production system around it for our environment. However, Cosign has the major drawback that it requires users to authenticate! Unlike our existing system, where web authentication occurs transparently (as long as the user uses a supported browser on a managed platform ...), users had to explicitly authenticate to the Cosign portal. Initial investigations looked at using x509 certificates (delivered by the kx509 mechanism) to authenticate users to Cosign, and then allowing Cosign to authenticate the user to the application. However, we'd always had the problem with kx509 that it wasn't possible to perform certificate delegation without running a service called 'kct' on the KDCs. We'd always been a little wary of kct's code quality and, in fact, had never deployed it in production. This lack of delegation appeared to rule out kx509-based Cosign for many of the web applications we were interested in building, all of which seemed to benefit from some form of credential delegation. I'll talk more about those later.

So, despite the fact that, ironically, Cosign had originally been chosen because of its kx509 support, we had to look elsewhere. The NegotiateAuth HTTP authentication mechanism allows browsers to perform Kerberos authentication, and was a promising fit. We control the installation settings of Firefox on all of our managed machines, so we could ensure that NegotiateAuth was enabled for our weblogin servers (one problem with Firefox's NegotiateAuth mechanism is that its configuration settings aren't exposed in any UI, and are therefore hard to modify). This minimal support would ensure that our local user experience was no worse than with kx509. So, I spent a few days implementing NegotiateAuth support (the new negotiate directive) in Cosign's login script. This was relatively straightforward, especially compared to the issues with arranging for transparent fallback that followed.

The fallback issues are, as with most things on the web, down to differences in browser behaviour and UI. The simplest way to achieve fallback is to present the page to the browser with the required headers, and let the browser render the failure text if it can't perform the authentication. However, the way that browsers react - firstly if they don't support NegotiateAuth, secondly if they're not configured to support it for that domain, and thirdly if they don't have credentials - is highly variable, and often suboptimal. Usability testing fairly rapidly showed that this wouldn't be a viable option across the set of browsers we needed to support for remote users. So, we started looking for a mechanism to allow 'testing' for NegotiateAuth support, without alerting the browser.

The solution we ended up with uses some Javascript, and the XMLHttpRequest method, to perform a 'background' test of a NegotiateAuth protected page from the server. If this fetch succeeds, then we redirect the user's "main" login page to a NegotiateAuth protected copy of cosign.cgi, which proceeds to authenticate them based on their Kerberos credentials. This works on all of the browsers we tested (Firefox, Safari, Opera, Konqueror) with the exception of Internet Explorer. When IE is prompted to perform NegotiateAuth and doesn't have credentials, it produces a Basic login dialog box, which it then uses to try NTLM against the server. Our solution to this is to browser sniff in the redirect script, and to not even try NegotiateAuth if the browser is IE. We also disable the check for Safari, as this doesn't support the credential delegation which we require later in the authentication process. The (rather clunky, I'm afraid) production version of this script is available from
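The browser-sniffing decision in that redirect script can be sketched like this (a hedged illustration, not the production code; the User-Agent substrings are assumptions about 2007-era browser identification):

```python
# Illustrative sketch of the User-Agent sniff described above; the
# production script's exact checks may differ.
def should_try_negotiate(user_agent):
    """Decide whether to attempt the background NegotiateAuth probe."""
    ua = user_agent.lower()
    if 'msie' in ua:
        return False  # IE answers with a Basic dialog, then tries NTLM
    if 'safari' in ua:
        return False  # Safari can't delegate credentials, needed later
    return True

print(should_try_negotiate('Mozilla/5.0 (X11; U; Linux) Firefox/2.0'))  # → True
print(should_try_negotiate('Mozilla/4.0 (compatible; MSIE 6.0)'))       # → False
```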

Needless to say, there are further complications. We have Cosign deployed across multiple web servers for resilience, all of which answer requests for the same service name. Firstly, different browsers perform Kerberos service name lookups in different ways. Firefox always uses the canonical name of the host (that is, it uses the DNS to resolve the name in the URL, and uses the results of that resolution). Safari always uses the name entered in the URL. This means that our web servers must have keys both for the shared name and for HTTP/theirhostname. Firefox then throws an additional spanner in the works. The DNS lookup is performed twice - once for Kerberos, and once to determine the IP address of the host to connect to. If the names are being allocated in a round-robin fashion, then you can end up using HTTP/host-A as the service principal whilst connecting to host-B. So, all of our web login servers also have to have each other's keys in their keytabs. This Firefox bug is filed as bug #383312. The final problem is that the Apache NegotiateAuth module, mod_auth_kerb, only supports authenticating against a single, chosen key from a keytab. In our situation, we want it to use any key from the keytab. I've implemented a simple patch which adds the KrbServiceName Any directive, allowing the use of any key that's in the server's keytab.
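As an illustration, a minimal mod_auth_kerb configuration for the Negotiate-protected location might look something like the following (the location name and keytab path are hypothetical; KrbServiceName Any is the directive added by the patch):

<Location /negotiate-login>
AuthType Kerberos
KrbMethodNegotiate On
KrbMethodK5Passwd Off
Krb5Keytab /etc/httpd/krb5.keytab
KrbServiceName Any
Require valid-user
</Location>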

This is all now running as a stable service. I'll talk in a future post about some of the additions we've made to this in order to support Friend or Guest accounts, and more about the need for delegation.

Thursday, 27 September 2007

Key Exchange for OpenSSH 4.7p1

I finally managed to make time this evening to update my GSSAPI key exchange patches to OpenSSH 4.7p1, and release them to the world. There are no functional changes with this update, just removing some code from the patch that's made it into the OpenSSH tree. I hope to be able to get some other pieces out of the patch (the GssapiTrustDNS code, in particular) before the next release.

I'd also hoped to be able to announce a public release of my cascading credentials renewal code, but a colleague has discovered some problems with the server crashing when this code is enabled. The problem only seems to occur with particular versions of the MIT GSSAPI library, but I want to find out exactly what's causing this before making a public release.

Friday, 15 June 2007

Python GSSAPI bindings

So, after a few blind alleys, I finally got the JWChat code working. Unfortunately, what this revealed is that the state of GSSAPI support for Python isn't that great.

Essentially, there are two different sources of GSSAPI-Python bindings:

  • PyGSSAPI (on Sourceforge). This is old, and unmaintained. It's written in SWIG, but the SWIG source won't compile with recent SWIGs, and the provided C source won't work with current Python.
  • PyKerberos (part of Apple's CalDav server). This is a simple solution, but only provides access to an interface designed to do NegotiateAuth. The interface isn't object oriented, nor will it garbage collect properly.

In order to get PunJab doing what I needed, the quickest route seemed to be to add SASL support to the PyKerberos library, so I did so. This solution isn't particularly clean, nor does it interface well with situations where you're trying to do anything other than perform a SASL handshake using credentials acquired in a previous NegotiateAuth transaction.

Other local projects required a way to do normal GSSAPI SASL from Python, and I really wanted to tidy up the PunJab code, so I ended up implementing my own Python bindings. Whilst not yet complete, these currently provide enough functionality to implement a GSSAPI SASL layer for the Twisted Jabber library, which solves our immediate local issue.

Once I've finished documenting the library, I'll package it up and announce it more widely.

Tuesday, 22 May 2007

Adding SSO to JWChat

We're in the process of deploying a new Jabber server here, and have already got the server (jabberd2) and assorted clients (Gaim, Psi, AdiumX, Coccinella) supporting Kerberos-based single sign-on. In my idle moments, though, I've been playing with JWChat - which doesn't support any of the WebSSO technologies, instead requiring a username and password. This limitation isn't really JWChat's fault - instead, it's a product of the way that it must be implemented. JWChat is a complete, Javascript based, Jabber client which runs in the browser - it talks XMPP encapsulated in HTTP (using either HTTP-Binding, or HTTP-Polling) via a proxy, back to your Jabber server. The fact that it's implemented in the browser means that it doesn't have access to anything useful, like a password store, let alone a Kerberos credentials cache. The fact that the proxy just passes XMPP packets blindly to the server means that you can't use authentication to the proxy as a way of securely authenticating to the Jabber server.

So, here's a cunning plan. There's already a man-in-the-middle (the proxy), which is already slightly active at moments during the session. My plan is to make this proxy a little more active when connection establishment is being performed. In particular ...

  • Use the EXTERNAL SASL mechanism to indicate that the authentication is happening over an external channel.
  • The proxy runs at a URL which is optionally protected by some form of proxiable authentication (the one I'm interested in here is Kerberos, but other options are possible).
  • If the user has successfully authenticated to the proxy, then it looks out for the stream:features packet at the start of the XMPP handshake. It intercepts this packet, and adds EXTERNAL to the _start_ of the list of supported SASL mechanisms.
  • The client then picks which authentication mechanism to use. If it is a hacked version of JWChat, it will try EXTERNAL.
  • The proxy looks out for an attempt at doing EXTERNAL. If there is one, it doesn't forward this packet to the server. Instead, it starts its own GSSAPI based authentication, using the current user's credentials. It talks directly to the XMPP server (without returning any packets to the client) until the GSSAPI handshake is complete. It then fakes success or failure to the client as coming from the EXTERNAL mechanism.

This should all work, with a few caveats. We can't establish security layers (unless the proxy becomes really clever). The proxy needs to know about a Kerberised authentication mechanism (in our case, probably NegotiateAuth), and about how to do the SASL GSSAPI mechanism. The proxy needs to decode, and encode, XMPP packets by itself.
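The feature-interception step can be illustrated in Python (a sketch only - Punjab's real packet handling is Twisted-based, and the stanza here is simplified):

```python
# Sketch of the proxy rewriting stream:features to advertise EXTERNAL
# first; a real proxy would do this inside its XML stream handler.
import xml.etree.ElementTree as ET

SASL_NS = 'urn:ietf:params:xml:ns:xmpp-sasl'

def advertise_external(features_xml):
    """Prepend an EXTERNAL <mechanism> to the advertised SASL list."""
    root = ET.fromstring(features_xml)
    mechs = root.find('{%s}mechanisms' % SASL_NS)
    if mechs is not None:
        ext = ET.Element('{%s}mechanism' % SASL_NS)
        ext.text = 'EXTERNAL'
        mechs.insert(0, ext)  # clients that understand EXTERNAL see it first
    return ET.tostring(root, encoding='unicode')

features = ('<stream:features xmlns:stream="http://etherx.jabber.org/streams">'
            '<mechanisms xmlns="%s">'
            '<mechanism>GSSAPI</mechanism><mechanism>DIGEST-MD5</mechanism>'
            '</mechanisms></stream:features>' % SASL_NS)
rewritten = advertise_external(features)
```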

I'm going to have a go at implementing this using Punjab as the proxy (as Python is slightly less unfamiliar than Java at the moment, and there is at least a free SPNEGO / NegotiateAuth plugin for Twisted available in the Apple CalDav source).

Sunday, 13 May 2007

AFS & Kerberos Workshop

Sitting in Atlanta waiting for a delayed flight on my way back from the AFS & Kerberos Best Practices Workshop at SLAC.

The conference was a little lower-key this year, as accounting restrictions prevented commercial sponsorship & the provision of food at meal breaks. Whilst this perhaps reduced the amount of mingling that took place, it didn't really reduce the quality of the talks - which were another strong mixture of case studies and development reports. Another great effort from Derrick, Jeff, Russ, Tracy, Alf (and the sadly missing Moose).

The talks from the workshop are up on the workshop web pages. Of particular interest to us are Kristen's talk on NDMP, Russ's talk on his PAM modules (which I hope we'll be able to migrate to before our next client platform release), Love's description of rxgk, Matt and Marcus's byte-range locking, and Derrick's version of hostafsd.

There was also some talk of testing and build farms, which I hope I'll be able to find a way to help with.

Tuesday, 13 March 2007

GSSAPI Key Exchange for OpenSSH 4.6

Just a quick note that I finally managed to get it together enough to produce a GSSAPI key exchange patch for the new OpenSSH 4.6p1 release. The patch is available, as always, from

I'm also working on a patch to allow propagation of rekeys over the key exchange handshake. In theory, this means that if you are sitting at a workstation and renew your credentials on that machine all of the machines that you've forwarded tickets to over ssh will also get renewed credentials.

This promises to be really quite funky for people who work like me - with a single, desktop login at home, and many many ssh connections to machines at work. Being able to have all those connections 'magically' end up with valid Kerberos tokens just because I renew my ticket at home will greatly save on typing.

Monday, 26 February 2007

Shifting to Jabberd2.1.1

I'm in the process of migrating our development Jabber service from an FC3 platform, where it's running a heavily patched version of jabberd2.0, to the newly revitalised jabberd2.1. 2.1 already has our local Cyrus SASL patches included, so we can drop those from the patch set, along with a large number of other improvements. In addition, I'm taking the opportunity to improve our support for non-local clients.

In the earlier incarnation, our service would only accept GSSAPI connections - it didn't support any form of password based authentication. It was repeatedly pointed out to me that this was a pain! Clients such as iChat just wouldn't work, Adium, Gaim and Psi all had to be used in a locally patched form, and it was not particularly usable. So, I've suspended my concerns about people caching their Kerberos passwords in their chat clients, and added support for doing password based authentication. This has required some reconfiguration (we now use PAM for auth checking, rather than LDAP), and some code changes to jabberd2.1.

The PAM authreg module that ships with jabberd2.1 has some strange ideas about what a username looks like - it uses the full JID of the user when calling into the PAM stack, so the usernames it presents include the domain part. This doesn't work well with a conventional PAM stack, so I've patched the code to disable this behaviour.
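The fix amounts to stripping the JID down to its node part before it reaches PAM - something like this hypothetical helper (the actual patch is to jabberd2.1's C code):

```python
# Hypothetical helper showing the behaviour the patch restores:
# hand PAM the bare username, not the full user@domain JID.
def pam_username(jid):
    """Return the node part of a JID for use as a PAM username."""
    return jid.split('@', 1)[0]

print(pam_username('fred@jabber.example.org'))  # → fred
```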

I also wanted to be able to restrict password authentication to SSL connections, whilst still providing GSSAPI on insecure connections. Previously, jabberd2.1 didn't support having two different sets of supported SASL mechanisms, so I coded up a quick patch to implement this. It's worth noting that clients such as iChat, which use pre-standardisation authentication mechanisms, submit their password despite the server telling them not to. This means that the password will be exposed, regardless of the server setting. Ho hum.

Next step is creating a migration script for the user rosters (as we're changing the domain part of our users' JIDs).

Sunday, 25 February 2007

Gaim, Kerberos and Cyrus SASL

Some time ago, I added Cyrus SASL support to Gaim, so that it could do Kerberos authentication to Jabber servers. As we've developed our Jabber service locally, some issues with this support have emerged.

Firstly, there's been a bug introduced which causes connections to hang if a security layer is negotiated. The fix for this is in the Gaim patch tracker.

Secondly, the code uses the user's domain name as the server name when establishing a SASL connection. This doesn't affect the 'normal' DIGEST-MD5 and PLAIN mechanisms, and also has no effect in situations where the hostname matches the domain of the user. It does, however, cause GSSAPI connections to fail when contacting a server whose hostname is different from the user's domain (for example, servers that are located through SRV, rather than A, records). Again, there's a fix for this in the Gaim patch tracker.

The final change is a functionality change. When I originally wrote my patch, I changed the Jabber protocol definition to indicate that passwords were optional. Whilst this stopped Gaim from prompting for a password whilst doing GSSAPI authentication, it broke any other mechanism that actually required passwords. That bit of the change was quickly reverted! However, it is useful to not have to enter a password when authenticating using a mechanism that doesn't require it.

It turns out that if you don't register a password callback with Cyrus SASL, it will not attempt any mechanisms that require passwords. Using this, it's possible to prompt for passwords as required, rather than mandate them for a connection. This allows GSSAPI usage without a password, with fallback to password prompting for other mechanisms. I've just uploaded the patch for this to the Gaim tracker.
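The behaviour being relied on can be sketched as follows (an illustration of the principle, not Cyrus SASL's actual mechanism-selection code; the mechanism table is an assumption):

```python
# Illustration of the principle: mechanisms that need a password are
# only attempted when a password callback has been registered.
MECHANISMS = {'GSSAPI': False, 'DIGEST-MD5': True, 'PLAIN': True}  # needs password?

def usable_mechanisms(registered_callbacks):
    """Mechanisms the client will attempt, given its callbacks."""
    return [mech for mech, needs_password in MECHANISMS.items()
            if not needs_password or 'password' in registered_callbacks]

print(usable_mechanisms(set()))          # → ['GSSAPI']
print(usable_mechanisms({'password'}))   # → ['GSSAPI', 'DIGEST-MD5', 'PLAIN']
```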

GSSAPI support in Thunderbird

The GSSAPI support in Thunderbird has never returned particularly great error messages. In particular, if the server offers GSSAPI, and nothing else, you'll get told that the server doesn't support secure authentication when login fails.

For some reason, this error message seems to annoy people ...

We can't give 'real' error messages whenever GSSAPI fails, because we try GSSAPI whenever the server offers it, and there are lots of broken Linux installations out there which offer GSSAPI whenever the appropriate libraries are installed, regardless of whether the server has suitable key material or not.

So, it looks like Thunderbird needs to have some UI to say whether GSSAPI is supported or not. Of course, Jeff Altman said as much back in 2005...