This is the third and final post of a series examining how authentication – in particular, federated identity and standards-based single sign-on (SSO) – and attribute-based access control (ABAC) interrelate, and how they can interoperate in support of some interesting use cases. In case you haven’t read the earlier posts yet, or if you just want a quick refresher, Part I is here, and Part II is here.
In this final post, I’ll be looking in more detail at how ABAC, identity federation and single sign-on can be used together. As before, I’m really focused on these as building blocks – the relevant specifications for the key protocols, such as XACML, SAML, OAuth and OpenID Connect, are well covered elsewhere. The aim here is to look at how these standards can be combined to solve real business problems, and thus help provide the best of both worlds in terms of security and improved user experiences.
Let’s start by talking about what we mean by ‘security’ and ‘user’. I’m sure you have your own favorite definition of these, so I want to be clear about what I mean in this particular article.
Security: ensuring and evidencing that a user only performs those operations which are permitted.
This is a deliberately broad definition: the ‘operation’ might be as simple and (probably) low-risk as ‘retrieve the intranet home page’ or as complex and (potentially) high-risk as ‘purchase ten million US dollars’ worth of pounds sterling’.
User: any entity involved with or affected by the operation.
Again, deliberately broad, and designed to take into account the end-user, the developer, the device the transaction runs on, the application server, the compliance officer…
By inference, then, the ‘user experience’ is equally a big set of things, because we really are trying to keep all the people happy, all the time. OK… I’ll settle for ‘most of the people, most of the time’; or maybe ‘not make anyone unhappy’!
With that in mind, let’s bring this back to the identity security arena, and in the context of a scenario that is fictitious, but based on several actual use-cases, look at how standards-based authentication and authorization can help do this.
Here’s the scenario. A large insurance company – InsCo – provides life and critical illness policies. They have retail policies, which they sell to individual consumers direct and via third-party brokers; and they have corporate policies which they sell direct to large companies to cover their employees. In the latter case, they also need to manage these policies on an ongoing basis, to account (in the main) for employees starting and leaving the company.
From a business perspective, InsCo wants to be as easy as possible to do business with; to keep the costs of their operations as low as reasonably possible without compromising customer service; and to keep risk as low as possible.
I’m not going to worry too much about the direct-to-consumer business here – although that might form a topic for some exploration in the future! – but let’s dig into the business-to-business relationship InsCo has with their insurance brokers, and their corporate clients.
In both of these cases, it used to be that InsCo had to manage a lot of transient external identities. Why? Because individual brokers and the HR staff at corporate clients needed to log in to the InsCo Broker Portal to manage policies; and, for the corporate clients, InsCo also needed to handle changes to staff lists as employees left or joined.
There are some interesting risks associated with this, beyond the simple fact that InsCo has to manage external identities unnecessarily. For instance: if an insurance broker leaves one firm and joins another, they might still be able to log in and see previous policies. Bad.
So InsCo went ahead and implemented standards-based SSO, initially using SAML; as we saw in Part I, this helps a lot with security because it shifts most of the management of the external users back where it belongs. It might also make them more competitive (it’s easier for a broker to ‘log in’ to the InsCo portal, because they don’t have to remember a password, so they are more likely to sell an InsCo policy). It also lays the foundation for some other really interesting things.
InsCo can now extend a service to individuals within their corporate customers such that individuals can check details and even raise initial claims against their policies. To do this, all that’s required is for the corporate client to send an assertion with the details of the user and the fact that they are only authorized to view their own policy. Now – this could be done a couple of ways. First up, as we saw in Part II this decision could be made at the InsCo side. Most likely, InsCo would base the decision on a number of factors, including a ‘role’ attribute (‘employee’) sent by their customer; but also on, for instance, the existence of a valid and current insurance policy against that name, authorized by the corporation. It looks something like this:
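To make that concrete, here’s a minimal sketch of that InsCo-side decision logic. Everything here – the attribute names, the policy lookup and the `decide` function – is a hypothetical stand-in for a real PDP and its policy information sources:

```python
# Minimal sketch of the InsCo-side decision: permit 'view-policy' only when
# the asserted role is 'employee' AND a current, authorized policy exists
# for that subject. The attribute names and the lookup table are invented.

ACTIVE_POLICIES = {"alice@client.example": "POL-1001"}  # stand-in for a policy DB

def decide(subject_id: str, role: str, action: str) -> str:
    """Return a XACML-style decision string: 'Permit' or 'Deny'."""
    if (action == "view-policy"
            and role == "employee"            # role attribute from the assertion
            and subject_id in ACTIVE_POLICIES):  # valid policy exists for this user
        return "Permit"
    return "Deny"
```

A real PDP would evaluate this as policy rather than code, of course, but the shape of the decision – combining an asserted attribute with locally held data – is the same.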
But another option could be to allow the client itself to make some, or all, of that decision. It would be perfectly possible to build a flow in which the client acted as a PIP. You can do this several ways; the easiest (in my view) is to do it in SAML at the point of authentication. You can of course build the relevant attributes into the attribute contract… but if you try to include the total set of all possible attributes your application estate might ever require, you’ll end up with a very large attribute contract to start with; slow down your deployment process considerably (it can be a huge task to figure out all the attributes you might need); and give yourself a headache later on, when the attributes required for authorization change and you have to go back and reconfigure all your attribute contracts!
So, an alternative could be to use the Attribute Query profile of SAML 2.0. Instead of having to include everything up front in the initial assertion, the Attribute Query profile allows you to request additional attributes later in your processing cycle: just what is needed, just in time. The flow looks like this (in this case, the Client is the IdP and the ‘attribute authority’, and InsCo is the SP and the Attribute Requester):
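As a sketch, an AttributeQuery built with Python’s standard library might look like the following. The entity ID, subject and attribute names are invented; a real query would also be signed and carried over the SOAP binding:

```python
# Illustrative SAML 2.0 AttributeQuery: InsCo (the SP / attribute requester)
# asks the client's attribute authority for specific attributes about one
# subject, rather than receiving everything up front in the assertion.
import uuid
import datetime
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_attribute_query(subject: str, attr_names: list) -> str:
    query = ET.Element(f"{{{SAMLP}}}AttributeQuery", {
        "ID": "_" + uuid.uuid4().hex,
        "Version": "2.0",
        "IssueInstant": datetime.datetime.now(datetime.timezone.utc)
                                         .strftime("%Y-%m-%dT%H:%M:%SZ"),
    })
    ET.SubElement(query, f"{{{SAML}}}Issuer").text = "https://insco.example/sp"
    subj = ET.SubElement(query, f"{{{SAML}}}Subject")
    ET.SubElement(subj, f"{{{SAML}}}NameID").text = subject
    # Ask only for the attributes needed for this decision: just in time.
    for name in attr_names:
        ET.SubElement(query, f"{{{SAML}}}Attribute", {"Name": name})
    return ET.tostring(query, encoding="unicode")
```

Note how the query names only the attributes needed right now – the ‘just what is needed, just in time’ property described above.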
“But wait!” I hear you say. “Where’s my XACML PDP?” Let’s add that in:
It’s fair to say that the Attribute Query profile is not widely deployed, so if you want to go this route, you’ll need to make sure that all parties are able to support the model.
An alternative approach would be to send a XACML Attribute Request direct to the IdP:
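For illustration, here is roughly what such a request body could look like using the JSON encoding from the REST/JSON profiles of XACML 3.0. The category shorthand (`AccessSubject`, `Resource`, `Action`) and the attribute identifiers are standard XACML names; the endpoint and values are invented:

```python
# Sketch of a XACML request in JSON-profile form, as InsCo's context handler
# might send it toward the client side. No HTTP call is made here; a real
# deployment would POST this to the IdP's authorization/attribute endpoint.
import json

def build_xacml_request(subject_id: str, resource_id: str, action: str) -> str:
    request = {
        "Request": {
            "AccessSubject": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:subject:subject-id",
                 "Value": subject_id}]},
            "Resource": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:resource:resource-id",
                 "Value": resource_id}]},
            "Action": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id",
                 "Value": action}]},
        }
    }
    return json.dumps(request, indent=2)
```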
Same end result, fewer steps, but with some interesting implications:
- The IdP now needs both a SAML infrastructure and a XACML infrastructure
- You need two separate ways of establishing trust: one for your SAML request, and one for your XACML flow… and, interestingly, the XACML digital signature profile recommends that you do this via the existing SAML constructs.
You’ll need to make a decision project by project – and potentially even connection by connection – which of these architectures is the best fit.
There’s one related point here which is worth digging into briefly: how do we provision users from the clients? Today, we commonly see three approaches:
- Manual Provisioning, also known as ‘send me a spreadsheet’. Easy to implement for the Identity Provider, but time-consuming and costly for the Service Provider, and prone to error.
- ‘Just-in-time’ provisioning based on attributes in the SAML assertion. Zero burden for the Service Provider, but it requires the Identity Provider to deploy a solution that actually supports this, may not support record updates, and typically doesn’t handle deprovisioning or disabling of accounts (in other words, not CRUD, just CR or maybe CRU)
- Proprietary provisioning API. Overhead on both IdP and SP for this (since both have to deploy and maintain over time), and potentially open to poor security design and/or implementation. This also doesn’t scale well for the IdPs – they may end up having to support many different protocols for different SPs.
The reason this mess exists is that there haven’t been well-adopted standards for cross-domain user provisioning. That’s now changing, thanks to the System for Cross-Domain Identity Management (SCIM).
SCIM defines a simple, RESTful API for full create, read, update & delete (CRUD) operations for user provisioning. There are several products and toolkits emerging, and some commercial identity federation solutions have (or have announced) support for SCIM in their products. So now you can have this, with the SAML Tokens and the SCIM calls all flowing over HTTPS:
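To give a flavor of what those SCIM calls look like, here’s a sketch of building the JSON body for a SCIM 2.0 ‘create user’ request. The schema URN is the real SCIM core User schema; the userName and endpoint mentioned in the comments are examples only, and no HTTP call is made here:

```python
# Sketch of a SCIM 2.0 User resource, as the identity source would POST it
# to the consumer's /Users endpoint to provision a new account.
import json

SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"

def build_scim_user(user_name: str, given: str, family: str,
                    active: bool = True) -> str:
    user = {
        "schemas": [SCIM_USER_SCHEMA],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "active": active,  # PATCH this to False later to disable the account
    }
    return json.dumps(user)

# A real deployment would then send this over HTTPS, e.g. (hypothetical URL):
#   POST https://insco.example/scim/v2/Users
# with Content-Type: application/scim+json and suitable authentication.
```

Because SCIM covers update and delete as well as create, the ‘CR but not CRUD’ problem of just-in-time provisioning goes away: leavers can be disabled or deleted through the same API.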
You’ll notice that both my SCIM server and my Federation Gateway are connected with my PDP. This isn’t required, of course, but it’s certainly reasonable — and there are real-world production deployments doing this — to have your PDP make decisions about whether a given user provisioning/deprovisioning action is permitted or not.
For the sake of completeness, I’ll also note that SCIM is generally used as a ‘push’ protocol: that is, the provider of the identity (acting as the ‘client’ in SCIM terms) makes calls out to the consumer of the identity (which, somewhat confusingly, is the ‘Service Provider’ in SCIM terms, since it hosts the SCIM endpoint) as necessary to push changes across. However, nothing stops the roles being arranged the other way, with the consumer calling a SCIM endpoint at the identity source to ‘pull’ updates. SCIM is still not widely deployed, but as adoption increases, this might provide a third way to call for additional attributes in real time.
So now we have a basic, general-purpose auth’n/auth’z architecture at least for browser-based applications. But what about API calls?
Excluding internal calls, the most common examples of API calls are web service requests from other server-based applications, and requests from native mobile applications. Thick-client desktop application requests are a decreasing trend, while ‘other devices’ (i.e. internet-connected ‘things’ other than smartphones) are increasing rapidly.
InsCo has opened up some of their applications to accept inbound RESTful API calls, mainly for smartphone access, but with a view to other potential uses in time.
As we saw in Part II, OAuth and OpenID Connect are key protocols for this, so InsCo is going to need to add some capability to their deployment. Because we started out with our nice general-purpose architecture, though, this is ‘embrace and extend’ rather than ‘rip and replace’.
Effectively, I now have an ‘Identity Gateway’ as a layer between me and the outside world to move identity information around in a scalable, secure way; and I’m using a standardized mechanism internally to make access control decisions for the whole construct.
Let’s look in a little more detail at the data flow here for the native mobile app. Here’s what we need to do:
- The native application itself needs to be ‘authorized’ (in the OAuth sense) to make a request to the API on behalf of the user.
- In order for the OAuth server to issue a token to the native application, we need:
- The user to be authenticated
- The user to be permitted (that is ‘authorized’ in the XACML sense) to use the native app
- In order for the user to actually view their own policy details, we need that specific API request to be permitted, given attributes available at runtime to the PDP, including (but not necessarily limited to):
- The existence of a valid OAuth token, or an identity asserted on that basis by the OAuth authorization server
- The identity of the user (ideally provided by claims carried in the OpenID Connect ID token associated with the OAuth token – but it could be retrieved in other ways as well)
- Other contextual information such as geolocation, current employment status (potentially retrieved as an attribute query back to the client) and so on
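The checks above can be sketched as a simple chain. Everything here is a stub – token introspection, claims extraction and the PDP call are all assumed external services, and the attribute names are invented:

```python
# Sketch of the API-side authorization chain: check the OAuth token, pull
# the user identity from its associated claims, then ask the PDP with the
# runtime context. 'token' stands in for an introspection/claims result.

def authorize_api_call(token: dict, action: str, pdp_decide) -> bool:
    # 1. The OAuth token must be valid and unexpired.
    if not token.get("active"):
        return False
    # 2. Identity comes from the OpenID Connect claims tied to the token.
    subject = token.get("sub")
    if subject is None:
        return False
    # 3. The PDP decides, given identity plus contextual attributes
    #    (geolocation here is a placeholder for any runtime context).
    context = {"geo": token.get("geo", "unknown")}
    return pdp_decide(subject, action, context) == "Permit"
```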
Although this looks a little complicated, it’s actually just a sequence of flows built on the architecture InsCo has already deployed… and, of course, if regulations or business requirements change, it’s relatively simple to drop those changes into one place and have them affect the entire identity chain in one go, rather than having to patch things up application by application.
As an aside: if you’re interested in experimenting with OpenID Connect, but not yet ready to look at a commercial solution, you could take a look at the open-source OpenID Connect Apache Module, released by Ping Identity on GitHub.
One last thing to address. We haven’t yet talked about the power of being able to transport XACML statements themselves across organizational boundaries. So let’s look at a reason we might want to be able to do that.
Let’s say that InsCo now outsources storage of historical employee policies: these are policies that apply to employees who have retired, but which are still valid and in force. These don’t get accessed very frequently, so in order to reduce operational costs, InsCo works with a separate company – PolicyKeeper – which maintains the information.
Now, let’s imagine a scenario where an HR manager – Alice – at a Client needs to access the historical information. First, Alice accesses the InsCo application, using the identity flow we described earlier. She makes a request to view a historical policy, and the InsCo PDP – based on the relevant policy input – decides this request is permitted. It’s important to note that this decision is, rightly, made by InsCo. The decision needs to be passed on to PolicyKeeper, but for audit purposes, PolicyKeeper also needs to keep track of the identity of the original requestor. So InsCo issues a SAML statement to go along with the request for the policy. That SAML statement can contain the XACML ‘permit’ decision. And PolicyKeeper can now release the policy with an audit trail all the way back, showing who requested what, and who granted permission.
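A rough sketch of that last hop: InsCo issues a SAML assertion naming the original requester and carrying the ‘Permit’ decision. The element names below are simplified stand-ins – a real deployment would use the SAML 2.0 Profile of XACML (with its XACMLAuthzDecisionStatement) and sign the assertion:

```python
# Illustrative (simplified) SAML assertion carrying a XACML decision, so
# that PolicyKeeper gets both the decision and an auditable requester id.
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_decision_assertion(requester: str, resource: str, decision: str) -> str:
    assertion = ET.Element(f"{{{SAML}}}Assertion", {"Version": "2.0"})
    ET.SubElement(assertion, f"{{{SAML}}}Issuer").text = "https://insco.example"
    subj = ET.SubElement(assertion, f"{{{SAML}}}Subject")
    ET.SubElement(subj, f"{{{SAML}}}NameID").text = requester  # original requester
    # Simplified stand-in for an XACMLAuthzDecisionStatement:
    ET.SubElement(assertion, f"{{{SAML}}}Statement",
                  {"Resource": resource, "Decision": decision})
    return ET.tostring(assertion, encoding="unicode")
```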
So there we are. I hope this has given some insight into how you can combine these cross-domain identity protocols to solve real business problems; and by doing it the right way – using the right protocol for the right problem – you can improve the experience for all the users touched by an application.