Friday, 4 April 2008

GSSAPI Key Exchange for OpenSSH 5.0

It's that time again! There's been another OpenSSH release, and once again, I'm pleased to announce the availability of my GSSAPI Key Exchange patch for it.

Whilst OpenSSH contains support for GSSAPI user authentication, this still relies upon SSH host keys to authenticate the server to the user. For sites with a deployed Kerberos infrastructure this adds an additional, unnecessary, key management burden. GSSAPI key exchange allows the use of security mechanisms such as Kerberos to authenticate the server to the user, removing the need for trusted ssh host keys, and allowing the use of a single security architecture.

This patch adds support for the RFC4462 GSSAPI key exchange mechanisms to OpenSSH, along with adding some additional, generic, GSSAPI features. It implements

  • gss-group1-sha1-*, gss-group14-sha1-* and gss-gex-sha1-* key exchange mechanisms. (#1242)
  • Support for the null host key type (#1242)
  • Support for CCAPI credentials caches on Mac OS X (#1245)
  • Support for better error handling when an authentication exchange fails due to server misconfiguration (#1244)
  • Support for GSSAPI connections to hosts behind a round-robin load balancer (#1008)
  • Support for GSSAPI connections to multi-homed hosts, where each interface has a unique name (#928)

(Bug numbers are in brackets.)

This release fixes a problem where the GSSAPIStrictAcceptorCheck option was always enabled.

As usual, the code is available from

In addition, with this release I'm pleased to be able to announce an additional patch which implements cascading credential support. This allows credentials provided via key exchange to be cascaded through a set of ssh connections, so that once a user reauthenticates on their workstation, the new credentials are available on all machines to which they are currently connected. This is controlled via the new options GSSAPIRenewalForcesReKey and GSSAPIStoreCredentialsOnRekey. A PAM stack, 'sshd-rekey', may be defined to trigger renewal of additional credentials, such as X509 certificates or AFS tokens, when credentials are renewed on a particular machine. Cascading credential support is implemented using the standard ssh protocol.
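As a configuration sketch (the option names are the two mentioned above; where each goes, and the values shown, are illustrative rather than definitive):

```
# Client side (ssh_config): when local credentials are renewed, force a
# rekey so the renewed credentials cascade to existing connections.
GSSAPIRenewalForcesReKey yes

# Server side (sshd_config): store credentials delegated during a rekey
# into the user's credential cache.
GSSAPIStoreCredentialsOnRekey yes
```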

The cascading credentials patch is also available from the above website. Whilst it has been extensively tested, it has received less peer-review than the rest of the GSSAPI code. Reports of both success, and failure, would be greatly appreciated! If anyone would like to provide face-to-face feedback, I will be at the AFS & Kerberos Best Practices Workshop in May.

Wednesday, 30 January 2008

HTTP Authentication for Wordpress MU

I've been experimenting recently with deploying Wordpress MU as a blogging solution. As we use Cosign for all of our web authentication, we wanted Wordpress MU to be able to accept the contents of the REMOTE_USER variable to authenticate users, rather than relying upon Wordpress's internal authentication solution.

Much web searching found a number of people asking similar questions, and the HTTP Authentication plugin for a single user Wordpress install. Unfortunately, this plugin didn't work "out-of-the-box" with Wordpress MU, so I ended up patching it. The modified plugin is available from

It's still tailored to my needs. There's no support for automatic blog creation, for example, although that would be trivial to add. I haven't looked at its integration with Wordpress in much detail yet, either.

To use it, you need to protect your wp-login.php and wp-signup.php files with something like:

<Files wp-login.php>
CosignProtected On
AuthType Cosign
Require valid-user
</Files>

<Files wp-signup.php>
CosignProtected On
AuthType Cosign
Require group web/blog/create
</Files>

And your wp-admin directory with:

CosignProtected On
AuthType Cosign
Require valid-user

This also checks group membership before permitting blog creation.

To install the plugin, copy the file into your wp-content/mu-plugins directory, and configure using the HTTP Authentication tab in your Site Admin menu.

If you install this, please let me know how you get on!

We've also got an additional patch for Wordpress MU which makes it use an HTTPS site for blogs, rather than HTTP - I'm happy to share that on request.

Wednesday, 10 October 2007

OpenID IdP for Cosign

I've been following OpenID's progress for a while - whilst there still aren't any "killer" applications making use of it, it is a very promising example of federated identity for the 'real' world. One of the attractions of running an internal authentication system is the ability to use it to bootstrap an OpenID provider, so that whilst you're logged in to your organisation's system you can make use of an OpenID without requiring any additional authentication steps.

I spent a bit of time yesterday looking at the OpenID servers which are currently available. There isn't a huge amount of freely available server code - the easiest to modify appeared to be JanRain's PHP server, which is built on top of their general purpose PHP OpenID library. This server supports OpenID authentication, along with XRDS (a method for performing attribute exchange with OpenID enabled applications). However, it's designed to use an internal password database.

I've produced some patches to add a number of new features, allowing fallback against an enterprise authentication scheme.

  • When the ENTERPRISE_AUTHENTICATION define is set, if the web server provides a REMOTE_USER variable and the user exists in the local database, authenticate the user.
  • When the ENTERPRISE_AUTHENTICATION define is set, if REMOTE_USER is not set, remove any cached authentication information.
  • When the AUTOMATIC_REGISTRATION define is set, and a REMOTE_USER doesn't exist in the local database, add them.
  • When the login page is called, but a user has already logged in, just pass them on to the next stored action.
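The decision logic in those patches can be sketched like this (a simplified Python model, not the PHP implementation; the function name and the set-based "database" are stand-ins for illustration):

```python
def resolve_user(remote_user, db, enterprise_auth=True, auto_register=True):
    """Model of the patched login flow. db stands in for the server's
    local user database (here, just a set of usernames).

    Returns the authenticated username, or None if the session should be
    treated as unauthenticated (any cached authentication discarded)."""
    if not enterprise_auth:
        return None  # fall back to the server's own password handling
    if remote_user is None:
        return None  # no REMOTE_USER: remove cached authentication
    if remote_user in db:
        return remote_user  # known user, authenticated by the web server
    if auto_register:
        db.add(remote_user)  # AUTOMATIC_REGISTRATION: create them on the fly
        return remote_user
    return None

users = {"alice"}
resolve_user("bob", users)  # registers "bob" and returns it
```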

The patch is available from

The problem with this server is that it is all implemented through a single script. It isn't immediately apparent from the script which actions are expected to require authentication, and which are not. So, the script's existing workflow is preserved. Firstly, Cosign (our web authentication solution of choice) is configured so that it will provide REMOTE_USER information to the script where that is available, but won't prompt the user where it is not. This means that those portions of the script which should work for unauthenticated users will continue to do so, whilst those which require authentication redirect to the script's
?action=login handler. Secondly, Apache is configured using mod_rewrite so that requests for ?action=login are redirected to a Cosign protected location which always requires authentication. This triggers the usual Cosign authentication process, which eventually redirects back to the script itself. The change to the login page to accept pre-authenticated users then kicks in, and the script continues processing as usual.

The Apache configuration magic that accomplishes all of this is as follows:

Alias /iVouch/ /var/www/openid/src/
Alias /iVouch-login/ /var/www/openid/src/
php_value session.save_path /var/openid-session/

<Location /iVouch-login>
CosignProtected On
CosignGetKerberosTickets On
</Location>

<Location /iVouch>
CosignProtected On
CosignAllowPublicAccess On
</Location>

RewriteEngine On
RewriteCond %{QUERY_STRING} ^$
RewriteCond %{LA-U:REMOTE_USER} ^$
RewriteRule ^/iVouch/$ /iVouch-login/ [PT]

RewriteCond %{QUERY_STRING} action=login
RewriteRule ^/iVouch/$ /iVouch-login/ [PT]

RewriteCond %{QUERY_STRING} action=logout
RewriteRule ^/iVouch/$ [R]

This all assumes that the OpenID server is sitting under /iVouch/ on the web server - we'll probably move this to the top level if it ever goes into production. The first set of rewrite rules means that if you go to the front page of the script you will get logged in. The second set of rules forces a login when the script's login action is performed. The third rule calls the central Cosign logout function when the script's logout action is reached.
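To sanity-check the rule logic, here's a small Python model of where requests end up (this is not part of the deployment; the predicates simply mirror the RewriteCond lines above, and "LOGOUT" stands in for the central Cosign logout URL, which is elided in the config):

```python
def rewrite_target(path, query, remote_user):
    """Model of the mod_rewrite rules: decide where a request is sent.

    Three rule sets: force a login for the bare front page when the user
    is unknown, force a login for ?action=login, and hand ?action=logout
    to the central Cosign logout (represented here as "LOGOUT")."""
    if path != "/iVouch/":
        return path  # the rules only match the script's front page
    if query == "" and remote_user is None:
        return "/iVouch-login/"  # unauthenticated front page: trigger login
    if "action=login" in query:
        return "/iVouch-login/"  # explicit login action: always protected
    if "action=logout" in query:
        return "LOGOUT"  # redirect to the central Cosign logout
    return path  # everything else passes through untouched
```

For example, an already-authenticated user hitting the front page stays where they are, while an anonymous one is bounced through the protected alias.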

Tuesday, 9 October 2007

kx509, kerberos and cosign

One of the things I've been doing over the summer is working on implementing some additions for our web authentication system. I thought I'd take a few moments to discuss these changes, and to describe the way that we're using them.

Historically, we used client certificates for web authentication. Generally speaking, these client certificates were obtained using the University of Michigan's kx509 system, run transparently from the PAM stack at user login. When it worked, our users were unaware of a separate authentication step to use web applications, and all was well. However, we were (and are) seeing an increasing demand to provide web applications to clients that aren't under our control. These clients don't have the kx509 utilities installed, and don't have PAM (let alone having fancy things integrated into the stack). We implemented a solution which would download client certificates into the browser, but pretty soon ran up against the fact that most browsers have incredibly poor user interfaces for dealing with certificate expiry and selection. Implementing a replacement has been on the cards for years, but we'd limped along (using a locally developed kx509 implementation that worked with the Mac OS X keyring, to allow Safari to download credentials, and the new kx509 plugin for NIM developed by Secure Endpoints).

We decided upon Cosign (again from the University of Michigan) as a replacement web authentication system, and others set about building a production system around this for our environment. However, Cosign has the major drawback that it requires users to authenticate! Rather than our existing system, where web authentication occurs transparently (as long as the user uses a supported browser on a managed platform ...), they had to explicitly authenticate to the Cosign portal. Initial investigations looked at using x509 certificates (delivered by the kx509 mechanism) to authenticate users with those certificates to Cosign, and then allow Cosign to authenticate the user to the application. However, we'd always had the problem with kx509 that it wasn't possible to perform certificate delegation, without running a service called 'kct' on the KDCs. We'd always been a little wary of kct's code quality and, in fact, had never deployed it in production. This lack of delegation appeared to rule out kx509-based Cosign for many of the web applications we were interested in building, all of which seemed to benefit from some form of credentials delegation. I'll talk more about those later.

So, despite the fact that, ironically, Cosign had been originally chosen because of its kx509 support, we had to look elsewhere. The NegotiateAuth HTTP authentication mechanism allows browsers to perform Kerberos authentication, and was a promising fit. We control the installation settings of Firefox on all of our managed machines, so we could ensure that NegotiateAuth was enabled for our weblogin servers (one problem with Firefox's NegotiateAuth mechanism is that its configuration settings aren't exposed in any UI, and are therefore hard to modify). This minimal support would ensure that our local user experience was no worse than that with kx509. So, I spent a few days implementing NegotiateAuth support (the new negotiate directive) in Cosign's login script. This was relatively straightforward, especially compared to the issues with arranging for transparent fallback that followed.

The fallback issues are, as with most things on the web, down to the differences in browser behaviour and UI. The simplest way to achieve fallback is to present the page to the browser with the required headers, and let the browser render the failure text if it can't perform the authentication. However, the way that browsers react, firstly if they don't support NegotiateAuth, secondly if they're not configured to support NegotiateAuth for that domain, and thirdly if they don't have credentials is highly variable, and often suboptimal. Usability testing fairly rapidly showed that this wouldn't be a viable option across the set of browsers we needed to support for remote users. So, we started looking for a mechanism to allow 'testing' for NegotiateAuth support, without alerting the browser.

The solution we ended up with uses some Javascript, and the XMLHttpRequest method to perform a 'background' test of a NegotiateAuth protected page from the server. If this fetch succeeds, then we redirect the user's "main" login page to a NegotiateAuth protected copy of cosign.cgi, which proceeds to authenticate them based on their Kerberos credentials. This works on all of the browsers we tested (Firefox, Safari, Opera, Konqueror) with the exception of Internet Explorer. When IE is prompted to perform NegotiateAuth, and doesn't have credentials it produces a Basic login dialog box, which it then uses to try NTLM against the server. Our solution to this is to browser sniff in the redirect script, and to not even try NegotiateAuth if the browser is IE. We also disable the check for Safari, as this doesn't support credential delegation which we require later in the authentication process. The (rather clunky, I'm afraid) production version of this script is available from

Needless to say, there are further complications. We have Cosign deployed across multiple web servers for resilience, all of which answer requests for the same service name. Firstly, different browsers perform Kerberos service name lookups in different ways. Firefox always uses the canonical name of the host (that is, it uses the DNS to resolve the name in the URL, and uses the results of that resolution). Safari always uses the name entered in the URL. This means that our webservers must have keys for both HTTP/, as well as HTTP/theirhostname. Firefox then throws an additional spanner in the works. The DNS lookup is performed twice - once for Kerberos, and once to determine the IP address of the host to connect to. If the names are being allocated in a round-robin fashion, then you can end up using HTTP/host-A as the service principal whilst connecting to host-B. So, all of our web login servers also have to have each other's keys in their keytabs. This Firefox bug is filed as bug #383312. The final problem is that the Apache NegotiateAuth module, mod_auth_kerb, only supports authenticating against a single, chosen key from a keytab. In our situation, we want it to use any key from the keytab. I've implemented a simple patch which adds the KrbServiceName Any directive, allowing the use of any key that's in the server's keytab.
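For context, a mod_auth_kerb stanza using the patched directive might look something like this (the location and keytab path are placeholders, and only KrbServiceName Any comes from the patch; the other directives are standard mod_auth_kerb configuration):

```
<Location /negotiate-check>
  AuthType Kerberos
  AuthName "Kerberos Login"
  KrbMethodNegotiate on
  KrbMethodK5Passwd off
  Krb5KeyTab /etc/apache2/krb5.keytab
  # Patched: accept whichever HTTP/* key in the keytab the client asks for
  KrbServiceName Any
  Require valid-user
</Location>
```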

This is all now running as a stable service. I'll talk in a future post about some of the additions we've made to this in order to support Friend or Guest accounts, and more about the need for delegation.

Thursday, 27 September 2007

Key Exchange for OpenSSH 4.7p1

I finally managed to make time this evening to update my GSSAPI key exchange patches to OpenSSH 4.7p1, and release them to the world. There are no functional changes with this update, just removing some code from the patch that's made it into the OpenSSH tree. I hope to be able to get some other pieces out of the patch (the GssapiTrustDNS code, in particular) before the next release.

I'd also hoped to be able to announce a public release of my cascading credentials renewal code, but a colleague has discovered some problems with the server crashing when this code is enabled. The problem only seems to occur with particular versions of the MIT GSSAPI library, but I want to find out exactly what's causing this before making a public release.

Friday, 15 June 2007

Python GSSAPI bindings

So, after a few blind alleys, I finally got the JWChat code working. Unfortunately, what this revealed is that the state of GSSAPI support for Python isn't that great.

Essentially, there are two different sources of GSSAPI-Python bindings:

  • PyGSSAPI (on Sourceforge). This is old, and unmaintained. It's written in SWIG, but the SWIG source won't compile in recent SWIGs, and the provided C source won't work with current Python
  • PyKerberos (part of Apple's CalDav server). This is a simple solution, but only provides access to an interface designed to do Negotiate-Auth. The interface isn't object oriented, nor will it garbage collect properly.

In order to get PunJab doing what I needed, the quickest route seemed to be to add SASL support to the PyKerberos library, so I did so. This solution isn't particularly clean, nor does it interface well with situations where you're trying to do anything other than perform a SASL handshake using credentials acquired in a previous NegotiateAuth transaction.

Other local projects required a way to do normal GSSAPI SASL from Python, and I really wanted to tidy up the PunJab code, so I ended up giving in and implementing my own Python bindings. Whilst not yet complete, these currently provide enough functionality to implement a GSSAPI SASL layer for the Twisted Jabber library, which solves our immediate local issue.

Once I've finished documenting the library, I'll package it up and announce it more widely.

Tuesday, 22 May 2007

Adding SSO to JWChat

We're in the process of deploying a new Jabber server here, and have already got the server (jabberd2) and assorted clients (Gaim, Psi, AdiumX, Cocinella) supporting Kerberos based single signon. In my idle moments, though, I've been playing with JWChat - which doesn't support any of the WebSSO technologies, instead requiring a username and password. This limitation isn't really JWChat's fault - instead, it's a product of the way that it must be implemented. JWChat is a complete, Javascript based, Jabber client which runs in the browser - which talks XMPP encapsulated in HTTP (using either HTTP-Binding, or HTTP-Polling) via a proxy, back to your Jabber server. The fact that it's implemented in the browser means that it doesn't have access to anything useful, like a password store, let alone a Kerberos credentials cache. The fact that the proxy just passes XMPP packets blindly to the server, means that you can't use authentication to the proxy as a way of securely authenticating to the Jabber server.

So, here's a cunning plan. There's already a Man-In-The-Middle (the proxy). This is already slightly active at moments during the session. My plan is to make this proxy a little more active when connection establishment is being performed. In particular ...
*) Use the EXTERNAL SASL mechanism to indicate that the authentication is happening over an external channel.
*) The proxy runs at a URL which is optionally protected by some form of proxiable authentication (the one I'm interested in here is Kerberos, but other options are possible).
*) If the user has successfully authenticated to the proxy, then it looks out for the stream:features packet at the start of the XMPP handshake. It intercepts this packet, and adds EXTERNAL to the _start_ of the list of supported SASL mechanisms.
*) The client then picks which authentication mechanism to use. If it is a hacked version of JWChat, it will try EXTERNAL
*) The proxy looks out for an attempt at doing EXTERNAL. If there is one, it doesn't forward this packet to the server. Instead, it starts its own GSSAPI based authentication, using the current user's credentials. It talks directly to the XMPP server (without returning any packets to the client) until the GSSAPI handshake is complete. It then fakes success or failure to the client as coming from the EXTERNAL mechanism.
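As a rough sketch of the stream:features interception step (simplified Python using the standard XMPP namespaces; none of this is actual Punjab code, and a real proxy would work on a live stream rather than a complete document):

```python
import xml.etree.ElementTree as ET

SASL_NS = "urn:ietf:params:xml:ns:xmpp-sasl"

def advertise_external(features_xml):
    """Rewrite the server's <stream:features> stanza, inserting EXTERNAL
    at the start of the SASL mechanism list, as the proxy would for a
    user who has already authenticated to the proxy itself."""
    features = ET.fromstring(features_xml)
    mechanisms = features.find("{%s}mechanisms" % SASL_NS)
    if mechanisms is not None:
        external = ET.Element("{%s}mechanism" % SASL_NS)
        external.text = "EXTERNAL"
        mechanisms.insert(0, external)  # first in the list, so tried first
    return ET.tostring(features, encoding="unicode")

# A (simplified) features stanza as the server might send it:
server_features = (
    '<stream:features xmlns:stream="http://etherx.jabber.org/streams">'
    '<mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl">'
    '<mechanism>GSSAPI</mechanism><mechanism>PLAIN</mechanism>'
    '</mechanisms></stream:features>'
)
rewritten = advertise_external(server_features)
```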

This should all work, with a few caveats. We can't establish security layers (unless the proxy becomes really clever). The proxy needs to know about a Kerberised authentication mechanism (in our case, probably NegotiateAuth), and about how to do the SASL GSSAPI mechanism. The proxy needs to decode, and encode, XMPP packets by itself.

I'm going to have a go at implementing this using Punjab as the proxy (as Python is slightly less unfamiliar than Java at the moment, and there is at least a free SPNEGO / NegotiateAuth plugin for Twisted available in the Apple CalDav source).