Wednesday 23 November 2011

OAuth MyProxy Mash-up

This is a revisit of OAuth from two perspectives: first, because it was under consideration for the MashMyData project, and second, because it looks at using OAuth with MyProxyCA, something already exploited in the CILogon project.

MashMyData explored solutions for delegation in a secured workflow involving a portal, an OGC Web Processing Service and OPeNDAP.  The aim was, where possible, to re-use existing components, a large part of which was the federated access control infrastructure developed for the Earth System Grid Federation.  ESGF supports single sign-on with short-lived PKI credentials but we had no use case requiring user delegation.  When it came to MashMyData, the traditional solution with proxy certificates won out.  So why bother revisiting OAuth?
  • OAuth 2.0 simplifies the protocol flow compared with OAuth 1.0.  Where OAuth 1.0 used signed artifacts in the HTTP headers, with OAuth 2.0, message confidentiality and integrity can be taken care of with SSL.  
  • Custom SSL verification code is required for consumers to correctly verify certificate chains involving proxy certificates.  We can dispense with this as OAuth uses an independent mechanism for delegation.
There are a number of different ways you could apply OAuth.  Here we're looking at MyProxyCA, a short-lived credential service (SLCS).  Usually, a user would make a request to the service for a new credential (an X.509 certificate) and authenticate with username/password.  Applying OAuth, the user gives permission for another entity to retrieve, on their behalf, a credential representing them.

There's a progression to this solution as follows:
  1. MyProxyCA alters the standard MyProxy to enable certificates to be issued dynamically from a CA configured with the service.  The user's input credentials (username/password) are authenticated against a PAM or SASL plugin, which could link to a user database or some other mechanism.  MyProxyCA issues EECs.
  2. Front MyProxyCA with a web service interface and you gain all the benefits of HTTP - tools, middleware, support and widespread use.
  3. If you have an HTTP interface, you can easily front it with OAuth.
Step 2 has been developed for CILogon, and there is also MyProxyWebService.  This is how step 3 could look:

The user agent could be a browser, a command line client, or some other HTTP rich client application.  The OAuth 2.0 Authorisation Server is implemented in this case as middleware fronting the online CA.  The user must authenticate with it so that the online CA can ensure that this user has the right to delegate to the client.  I would envisage it also keeping a record of previously approved clients for each user, so that approvals can be made without user intervention should they wish.  I like this ability for the user to manage which clients may be delegated to, outside of the process of brokering credentials itself.

Once the client has obtained an authorisation grant, it generates a key pair and puts the public key in a certificate request to despatch to the online CA.  The latter verifies the authorisation grant and returns an OAuth access token in the form of a new user certificate delegated to the client.  The client can then use this in requests to other services where it needs to act on behalf of the user.  A big thank you to Jim Basney for his feedback in iterating towards this solution.
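To make this concrete, below is a minimal sketch of the client side of that exchange - generating the key pair, wrapping the public key in a PKCS#10 request with BouncyCastle, and despatching it with the authorisation grant.  The endpoint URL and class name are hypothetical, and this illustrates the shape of the flow only, not the actual CILogon or MyProxy implementation:

    import java.net.URL;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;

    import javax.net.ssl.HttpsURLConnection;
    import javax.security.auth.x500.X500Principal;

    import org.bouncycastle.operator.ContentSigner;
    import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
    import org.bouncycastle.pkcs.PKCS10CertificationRequest;
    import org.bouncycastle.pkcs.jcajce.JcaPKCS10CertificationRequestBuilder;

    public class DelegatedCertClient {

        public static void main(String[] args) throws Exception {
            // Authorisation grant previously obtained from the
            // Authorisation Server via the user's approval
            String authzGrant = args[0];

            // Generate a key pair - the private key never leaves the client
            KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
            generator.initialize(2048);
            KeyPair keyPair = generator.generateKeyPair();

            // Wrap the public key in a PKCS#10 certificate request; the
            // online CA sets the subject name itself so the one given
            // here is only nominal
            ContentSigner signer = new JcaContentSignerBuilder("SHA256withRSA")
                    .build(keyPair.getPrivate());
            PKCS10CertificationRequest csr =
                    new JcaPKCS10CertificationRequestBuilder(
                            new X500Principal("CN=placeholder"),
                            keyPair.getPublic()).build(signer);

            // POST the request together with the grant to the online CA's
            // token endpoint (hypothetical URL); the response body would
            // be the newly issued certificate - the OAuth access token
            URL tokenEndpoint = new URL(
                    "https://onlineca.example.org/oauth/token?code=" + authzGrant);
            HttpsURLConnection conn =
                    (HttpsURLConnection) tokenEndpoint.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.getOutputStream().write(csr.getEncoded());
            System.out.println("Online CA responded: " + conn.getResponseCode());
        }
    }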

There remains the question of the provenance of credentials: how do I distinguish this delegated certificate from one obtained directly by the user?  With proxy certificates you at least have the chain of trust going back to the original user certificate issuer, and the CN component added to the certificate subject name for each delegation step.  For ESGF, we added a SAML attribute assertion into a certificate extension for issued EECs.  There's a profile for this with VOMS which we hope to adopt for Contrail, and the same idea has been used elsewhere.  I'd like to be able to express provenance information in this same space.  I would think it would likewise be an attribute assertion, but it needs some more thought.  The certificate extension content could be extended with additional SAML to meet a couple of other objectives (sketched after the list below):
  • add an authorisation decision statement to restrict the scope of resources that the issued certificate is allowed to access
  • add level of assurance information via a SAML Authentication Context.
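For a flavour of how those statements might look in the extension, here's a sketch in SAML 2.0 - the resource URI, action namespace choice and authentication context class are purely illustrative, not a worked profile:

    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
      <!-- Restrict the scope of resources the certificate may access -->
      <saml:AuthzDecisionStatement Decision="Permit"
          Resource="https://datanode.example.org/thredds/dodsC/mydata">
        <saml:Action
            Namespace="urn:oasis:names:tc:SAML:1.0:action:ghpp">GET</saml:Action>
      </saml:AuthzDecisionStatement>
      <!-- Convey the level of assurance of the original authentication -->
      <saml:AuthnStatement AuthnInstant="2011-11-23T09:00:00Z">
        <saml:AuthnContext>
          <saml:AuthnContextClassRef>
            urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
          </saml:AuthnContextClassRef>
        </saml:AuthnContext>
      </saml:AuthnStatement>
    </saml:Assertion>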
 

Thursday 20 October 2011

Federated Clouds

Imagine the ability to seamlessly manage independent resources on multiple cloud providers with a single interface.  There are some immediate benefits to consider: avoiding vendor lock-in, migration of a resource from one cloud to another, replication of data ...

You might be excused for thinking it's a little ambitious, but a colleague on the Contrail project drew my attention to this article on Cloud Brokering.  As Lorenzo said, you don't have to pay for the full article to get the gist, but it seems from a rudimentary search that a number of commercial products have already ventured into this area:

http://www.datacenterknowledge.com/archives/2009/07/27/cloud-brokers-the-next-big-opportunity/

Federating clouds is a core objective of Contrail, and from what I heard at the Internet of Services meeting I attended last month, there's plenty of research interest in this topic.  Picking out some points raised in the discussions (with some of my own thoughts mixed in):
  • the importance of federating clouds for the European model.  Cloud infrastructures deployed in smaller member states can't match the resources available to the large US enterprises, but if those smaller infrastructures are joined in a federation, their resources can be pooled to make something greater.
  • standards are essential for federated clouds to succeed (an obvious point really), but existing standards such as OVF and OCCI provide incomplete coverage of the spectrum of cloud architecture related concerns.
  • the problem of funding and continuity of work is general to many research efforts, but cloud technology by its nature surely needs a long-term strategy for it to flourish.
  • the need for longer-term research goals with a 10-15 year gestation; short-term goals will be absorbed by commercial companies.  There's a danger of simply following rather than leading.
So on the last point then, it's all right to be ambitious ;)


Friday 7 October 2011

Federated Identity Workshop Coming up...

This is a plug for a workshop on Federated Identity Management for Scientific Collaborations coming up at RAL, 2-3 November:

http://indico.cern.ch/conferenceDisplay.py?ovw=True&confId=157486

It follows the first of its kind, held earlier this year at CERN, which brought together experts in the field and representatives from a range of different scientific communities to present the state of play for federated identity management in their various fields and to draw together a roadmap for future development.  See the minutes for a full account.

Picking out just a few themes that were of interest to me: inter-federation trust came up a number of times, as did the need for services to translate credentials from one domain to another.  I read that as a healthy sign that a) the various federated identity management systems have bedded down and become established, and b) there is not a fight between competing security technologies for one to take over all, but rather a facing up to the realities of how to make them work so that they co-exist alongside each other.

Credential translation brings in two more interesting issues, provenance and levels of assurance, which also arose independently in some of the talks and discussions.  If I have a credential that is the result of a translation of another credential from a different domain, how much information is transferred between the two?  Is the translation lossy?  Are the identity concepts and various attributes semantically the same?  The same issues arise, perhaps to a lesser degree, with delegation technologies.

Levels of assurance is another issue that is surely going to crop up more and more as different authentication mechanisms are mixed together in systems: the same user can enter a federated system by different methods, so how do we ensure that they are assigned access rights accordingly?  These are complicated issues to tackle, but the fact that they can begin to be addressed shows the progress that has been made building on the foundations of established federated systems.


Friday 19 August 2011

Java SSL Whitelisting of Peer Certificate Subject Names

Helping a colleague just today has reminded me to finish this post, drafted a long time ago.  Last year I was dipping into the Java SSL libraries to write a short piece of code to call a service running over HTTPS where mutual authentication is required: the client authenticates the server based on the server's X.509 certificate passed in the SSL handshake, but in addition the client must pass a certificate to enable the server to authenticate it in turn.

By default, the Java SSL trust manager will trust peer certificates provided that they are issued by any of the CAs (Certificate Authorities) whose certificates appear in the default trust store for the JVM.  It's possible to customise the trust manager to use a given trust store for more fine-grained control, but what if we want to trust only a certain subset of the certificates issued by a given CA or CAs?

One way to achieve this is to whitelist based on the peer certificate DN or Distinguished Name.   This is something that is straightforward to do on the server side with, for example, Apache using the SSLRequire directive. It's also a practice used in Grid computing authorisation middleware with ACLs (Access Control Lists).  Rather than the protection of some server-side resource, the problem to solve in this case is a client invocation.
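For instance, server-side whitelisting with Apache could look something like this (the organisation value is illustrative):

    SSLVerifyClient require
    SSLRequire %{SSL_CLIENT_S_DN_O} eq "My Organisation"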

Returning to the SSL API then, this can be achieved by implementing the javax.net.ssl.X509TrustManager interface.  The key method for client-side checking of server certificates is checkServerTrusted.  The relevant hooks can be set in here to check the peer certificate against a whitelist:

    public void checkServerTrusted(X509Certificate[] chain, String authType)
        throws CertificateException {
     
        // Default trust manager may throw a certificate exception
        pkixTrustManager.checkServerTrusted(chain, authType);
        
        // If chain is OK following previous check, then execute whitelisting 
        // of DN
        X500Principal peerCertDN = null;
  
        if (certificateDnWhiteList == null || 
            certificateDnWhiteList.isEmpty())
            return;
  
        int basicConstraints = -1;
        for (X509Certificate cert : chain) {
            // Check for CA certificate first - ignore if this is the case
            basicConstraints = cert.getBasicConstraints();
            if (basicConstraints > -1)
                continue;

            peerCertDN = cert.getSubjectX500Principal();
            for (X500Principal dn : certificateDnWhiteList)
                if (peerCertDN.getName().equals(dn.getName()))
                    return;

            throw new CertificateException("No match for peer certificate \"" +
                    peerCertDN + "\" against Certificate DN whitelist");
        }
    }

pkixTrustManager is the default trust manager, whilst certificateDnWhiteList is a list of accepted DNs as X500Principal types.  These can be initialised in the class's constructor from a properties file or some other input.  The pkixTrustManager.checkServerTrusted call applies the default verification of the peer's certificate based on the CA certificates present in the client's trust store.  If this succeeds, a loop then iterates over the certificate chain returned by the peer, skipping any CA certificates*.  Once the peer certificate is found, its DN is extracted and checked against the whitelist.  If matched, the method returns silently to the caller indicating all is OK.  If no match is found, a CertificateException is thrown to indicate that the peer certificate is not in the accepted list of DNs.  This could easily be extended to do more sophisticated matching, for example using regular expressions.
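To use it for a client invocation, the custom trust manager just needs plugging into an SSLContext.  A minimal usage sketch follows, assuming the method above lives in a class called DnWhitelistX509TrustManager (a hypothetical name) with a no-argument constructor that loads the whitelist:

    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManager;

    public class WhitelistClientExample {
        public static void main(String[] args) throws Exception {
            // The first argument to init() would carry the key managers
            // supplying the client certificate for mutual authentication;
            // null here for brevity, which defers to the JVM defaults
            SSLContext context = SSLContext.getInstance("TLS");
            context.init(null,
                    new TrustManager[] { new DnWhitelistX509TrustManager() },
                    null);

            HttpsURLConnection connection = (HttpsURLConnection)
                    new URL("https://myservice.example.org/").openConnection();
            connection.setSSLSocketFactory(context.getSocketFactory());
            System.out.println("HTTP status: " + connection.getResponseCode());
        }
    }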

This technical article provides some more background (scroll down a long way to the Trust Manager heading).  The full source for the example above is available here.

[* The peer can of course pass back not only its own certificate, but any intermediate CA certificates needed to complete the chain of trust to a root CA certificate held by the client.]

Thursday 10 February 2011

Proxy certificates and delegation with netCDF Beta release

I've written previously about the extensions to the netCDF C API to enable SSL client-based authentication.  It's been great to see how something slotted in at the base of a software stack filters down to benefit all the dependents: colleagues have been testing Ferret, ncview and Python bindings built against the updated libraries and using them to query ESG-secured OPeNDAP services.  This links with another thread: the MashMyData project extends this SSL client-based authentication mechanism from EECs (End Entity Certificates) - the current currency for short-lived credentials in ESGF (Earth System Grid Federation) - to RFC 3820 proxy certificates.  This is necessitated by the need for delegation in the chain of operations in our use case: a chain linking a portal to an OGC Web Processing Service which itself calls an OPeNDAP service.  So, on to trying out the netCDF C client with a proxy certificate:
  • Get ESG-enabled netCDF - now in 4.1.2 beta2
  • Build a simple client against this version of the library
  • Get an EEC and delegate (the Globus Toolkit is needed for this example)
So expanding the last step:

1) Get short lived EEC from home MyProxy server:

$ myproxy-logon -s <my idp's myproxy host address> -o user.pem -b

2) Delegate to obtain proxy certificate:

$ grid-proxy-init -cert user.pem -key user.pem -out ./credentials.pem -rfc

3) Update netCDF configuration to pick up credentials:

CURL.VERBOSE=1
CURL.COOKIEJAR=.dods_cookies
CURL.SSL.VALIDATE=1
CURL.SSL.CERTIFICATE=<path>/credentials.pem
CURL.SSL.KEY=<path>/credentials.pem
CURL.SSL.CAPATH=<home path>.globus/certificates

Calling the netCDF client makes the underlying curl library invocation and correctly passes the certificate chain comprising the proxy certificate and the EEC that issued it (the grid-proxy-init step).  The OPeNDAP server and associated security middleware are correctly configured to accept proxy certificates.  I get my data back :).
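As a quick end-to-end check, an ncdump built against the same DAP-enabled library exercises the whole chain (the dataset URL here is hypothetical):

$ ncdump -h https://esg-datanode.example.org/thredds/dodsC/some/dataset.nc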

Monday 24 January 2011

It's nice when it just works

Last week we deployed the full access control infrastructure with our TDS (THREDDS Data Server), part of the Data Node component we are hosting at the BADC as part of the Earth System Grid Federation (ESGF).  What's been pleasing is that we have been able to take independent implementations and combine them easily into a working system.

The ESGF, in terms of software implementation, is predominantly Java based, but within the context of access control there is a parallel Python based 'NDG Security' implementation here.  We now have TDS deployed too, hooked up to the same system.  This follows up from a previous post on the authorisation infrastructure for ESG where I showed PyDAP, a Python implementation of OPeNDAP, deployed with our authorisation system.  TDS is of course Java based and we run it within Tomcat, fronted with a servlet-based authorisation filter.  The common interface to the authorisation system is a SAML web service callout from the filter to an Authorisation Service.  ESGF has a Java based Authorisation Service implementation, but here we've deployed a Python based one from NDG Security which shares the same interface.  Plugging the TDS into this was simply a question of making the connection settings and adding the additional rules needed in the XACML policy.
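For a flavour of what such a rule might look like, here's a simplified XACML sketch - the resource URI pattern and subject attribute ID are illustrative rather than the actual ESGF vocabulary:

    <Rule RuleId="tds-data-access" Effect="Permit">
      <Target>
        <Resources>
          <Resource>
            <ResourceMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:regexp-string-match">
              <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">^/thredds/dodsC/mydata/.*$</AttributeValue>
              <ResourceAttributeDesignator
                  AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
                  DataType="http://www.w3.org/2001/XMLSchema#string"/>
            </ResourceMatch>
          </Resource>
        </Resources>
      </Target>
      <!-- Permit only if the subject holds the required group attribute,
           e.g. one the PIP retrieves from the attribute service -->
      <Condition>
        <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">cmip5_research</AttributeValue>
          <SubjectAttributeDesignator
              AttributeId="urn:esg:group:role"
              DataType="http://www.w3.org/2001/XMLSchema#string"/>
        </Apply>
      </Condition>
    </Rule>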

So below, a user's netCDF client (it could equally be a browser) can talk to two apps, PyDAP and TDS, to make OPeNDAP queries.  PyDAP is deployed with mod_wsgi/Apache.  Each service is fronted by an authorisation filter (in practice, authentication filters too, but these are omitted here for simplicity).  The respective filters intercept requests and query the authorisation service to make an access control decision.  The Authorisation Service, itself a Python app, also runs under mod_wsgi/Apache.

Within the Authorisation Service, a context handler translates the incoming SAML decision request query to XACML (yes, XACML could have been used instead between the filters and the Authorisation Service) and feeds the request to the Policy Decision Point.  The PDP has an XACML policy fed to it at start-up.  When making an access decision, it can also query for additional attributes by asking the context handler to query the Policy Information Point.  The PIP can query for federation-wide attributes from an Attribute Service at PCMDI.  PCMDI has a key role administering access in the federation.  The PDP makes its decision and a response is sent via the context handler back to the filter fronting the respective app.