This blog describes how to deploy a cloud-native authorization engine based on Attribute Based Access Control (ABAC) as a microservice application in a Kubernetes (K8s) environment. We will use Google Kubernetes Engine (GKE), but the Kubernetes commands in this blog should work in any environment, public cloud or on-premises, because Kubernetes is vendor-agnostic with respect to the underlying infrastructure.

We will walk through an example in which the Policy Enforcement Points (PEPs) sit outside the K8s cluster while the cloud-native authorization engine is deployed as a microservice application inside the cluster.

A Sample Deployment

The following diagram shows a sample deployment of the cloud-native authorization engine in GKE.

In this example Deployment, we want three pods of the authorization engine that use an image from Google Container Registry (GCR). We also want to mount the configurations from a ConfigMap as a volume in the pods, and we will use a Secret to store the username and password.

The Policy Enforcement Points (PEPs) sit outside the K8s cluster and send authorization requests to the authorization engine through a LoadBalancer. We will create this sample deployment in the following sections.

Creating a Kubernetes Cluster using GKE

Let’s create a project in Google Cloud Platform (GCP) to work in and then spin up a K8s cluster in it. For the rest of the blog, we will use the project name Authz-Engine-Demo and the project ID authz-engine-demo-1. Make sure the Google Cloud SDK is installed on your host machine and that billing is enabled in your GCP project settings so that you can use the Google APIs. To create the project, run:

$ gcloud projects create authz-engine-demo-1 --name="Authz-Engine-Demo"
$ gcloud config set project authz-engine-demo-1
$ gcloud config set compute/zone europe-west3-a

Feel free to change the compute/zone. Then run the following commands to enable the GKE API and create a Kubernetes cluster.

$ gcloud services enable container.googleapis.com
$ gcloud container clusters create authz-engine-demo --zone europe-west3-a

Typically, GKE creates the cluster in about two minutes. To verify, run:

$ kubectl get nodes
$ kubectl get pods -n kube-system

You should see a three-node cluster, and all pods in the kube-system namespace should be running.

Creating a Namespace

First, we will create a Namespace for our deployment. Managing a cluster can be cumbersome, especially if you have hundreds of pods in it. If you want to deploy hundreds of authorization-engine pods with different authorization domain configurations, administering them through namespaces is a good idea. Create a Namespace with the following command:

$ kubectl create namespace engineering

Here a Namespace called engineering is created for the engineering domain as an example. To view the available Namespaces, run:

$ kubectl get namespace

The engineering Namespace appears on the list together with the built-in Kubernetes Namespaces such as default, kube-system, and kube-public.
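For completeness, the same Namespace could also be created declaratively from a manifest; the file name namespace.yaml below is just an example:

apiVersion: v1
kind: Namespace
metadata:
  name: engineering

$ kubectl create -f ./namespace.yaml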

Creating a Secret

Since we will enable basic authentication later in the authorization engine’s deployment configuration, we need a Secret. We could put the user information directly in the deployment configuration file, but a better approach is to manage it through K8s Secrets. A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key; keeping the username and password in a Secret is safer and more flexible than hard-coding them in the deployment file. As an example, say we have a user authz-user with the password mysecretpassword. Note that the authorization engine expects a SHA-256 hash of the password; for mysecretpassword the hash is 94aefb8be78b2b7c344d11d1ba8a79ef087eceb19150881f69460b8772753263.
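On Linux or macOS, the hash can be computed on the command line, assuming the engine expects the SHA-256 hash of the plain password string:

$ echo -n 'mysecretpassword' | sha256sum
94aefb8be78b2b7c344d11d1ba8a79ef087eceb19150881f69460b8772753263  -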

To create the Secret object, the username and password must be base64-encoded. Run:

$ echo -n 'authz-user' | base64

YXV0aHotdXNlcg==

$ echo -n '94aefb8be78b2b7c344d11d1ba8a79ef087eceb19150881f69460b8772753263' | base64

OTRhZWZiOGJlNzhiMmI3YzM0NGQxMWQxYmE4YTc5ZWYwODdlY2ViMTkxNTA4ODFmNjk0NjBiODc3Mjc1MzI2Mw==

We then use the base64-encoded values in an authz-user.yaml file to define our Secret object.
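A minimal sketch of what authz-user.yaml could look like is shown below; the data key names username and password are assumptions and must match whatever the Deployment references later:

apiVersion: v1
kind: Secret
metadata:
  name: authz-user
type: Opaque
data:
  # base64-encoded values from the commands above; key names are assumptions
  username: YXV0aHotdXNlcg==
  password: OTRhZWZiOGJlNzhiMmI3YzM0NGQxMWQxYmE4YTc5ZWYwODdlY2ViMTkxNTA4ODFmNjk0NjBiODc3Mjc1MzI2Mw==

To create the Secret from this file, run: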

$ kubectl create -f ./authz-user.yaml -n engineering

A Secret called authz-user has now been created in the engineering namespace. We will later reference its data keys in the authorization engine pod as the values of the USERNAME and PASSWORD environment variables. To view the Secrets in the engineering namespace, run:

$ kubectl get secrets -n engineering

Creating a Docker Image

The pod requires an image to run the application. We are going to use Google Container Registry (GCR) as our private Docker registry. To create the image, we need a Dockerfile for the authorization engine. The following is a snippet of the Dockerfile:

FROM openjdk:8-alpine

WORKDIR /opt/ads/
COPY access-decision-service-*.jar ads.jar

Here we use the Axiomatics authorization engine called Access Decision Service.
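The snippet above omits the startup instruction. A possible completion is sketched below; the exposed ports and the exact startup arguments (for example, how the deployment configuration under /config is passed) are assumptions, so consult the Access Decision Service documentation:

EXPOSE 8990 9090
# Startup command is an assumption; adjust the arguments per the product documentation
ENTRYPOINT ["java", "-jar", "ads.jar"]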

To build an image for GCR, run the docker build command with a tag of the form gcr.io/<project_id>/<repository_name>:<version>, where <project_id> is the ID of the active GCP project.

$ gcloud auth configure-docker
$ docker build -t gcr.io/authz-engine-demo-1/ads:v1.1 .
$ docker push gcr.io/authz-engine-demo-1/ads:v1.1

Here we build and push the image using the project ID authz-engine-demo-1 and the repository name ads.

We will use the image later in our K8s Deployment manifest.

Creating a ConfigMap

The authorization engine requires two configuration files: a deployment configuration and an authorization domain configuration. Not to be confused with a Kubernetes Deployment, the authorization-engine deployment configuration is a YAML file used to initialize the engine; it references the server, authorization domain, and license configuration, among other things.

The authorization engine deployment file is deployment.yml. Note how it uses environment variables as the values for the username, password, license, domain configuration, and connector ports. We will set these environment variables in the authorization-engine pod later, when we create the Kubernetes Deployment object.

The authorization domain configuration is the engineering-domain.xml file, exported from Axiomatics Services Manager (ASM). Below is an ALFA snippet of the authorization policy used in engineering-domain.xml as the main policy configuration. In this example policy, we want to prevent viewers younger than 18 from viewing horror movies.

policy moviePolicy {
    target clause actionId == "view" and movie_genre == "horror"
    apply permitUnlessDeny

    rule denyIfLessThan18 {
        deny
        condition age < 18 || integerBagSize(age) == 0 || integerBagSize(age) > 1
    }
}

The integerBagSize conditions also deny the request when the age attribute is missing or carries more than one value. We will use a K8s ConfigMap to store these configuration files, which lets us decouple configuration artifacts from the authorization engine’s Docker image. To create the ConfigMap, run:

$ kubectl create configmap engineering-config --from-file=deployment.yml --from-file=engineering-domain.xml -n engineering

Here we create a ConfigMap named engineering-config in the engineering namespace. We will later mount it into the authorization engine pods as a volume.

To view the ConfigMaps in the engineering namespace, run:

$ kubectl get configmap -n engineering

Creating a Deployment

We will use the ads-deployment.yaml file from the GitHub repo to create a Deployment object. In this file, we name our Deployment authz-service-engineering. In the spec section, we set replicas to 3, which gives us three pods of the authorization-engine application. We use the image gcr.io/authz-engine-demo-1/ads:v1.1 that we built earlier and set container ports for the application and admin endpoints of the authorization engine, 8990 and 9090 respectively. We also set the environment variables that initialize the authorization engine. One thing to note is the value of the LICENSE environment variable: it is an HTTP URL from which the license is served. In this example, the license server is another pod running in the engineering namespace, and the authorization engine can reach it through K8s service discovery within the same namespace.

We also specify a readinessProbe to make sure the authorization engine only receives authorization requests once it is ready, and we define a volume named engineering-config-volume backed by the engineering-config ConfigMap. Finally, we mount this volume into the authorization engine pods at the /config path.
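The following is a condensed sketch of what ads-deployment.yaml might look like based on the description above. The USERNAME, PASSWORD, and LICENSE variable names come from the text; the remaining environment variable names, the Secret key names, the license-server URL, and the readiness endpoint are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: authz-service-engineering
spec:
  replicas: 3
  selector:
    matchLabels:
      app: authz-service-engineering
  template:
    metadata:
      labels:
        app: authz-service-engineering
    spec:
      containers:
      - name: ads
        image: gcr.io/authz-engine-demo-1/ads:v1.1
        ports:
        - containerPort: 8990        # application port
        - containerPort: 9090        # admin port
        env:
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: authz-user
              key: username          # key name is an assumption
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: authz-user
              key: password          # key name is an assumption
        - name: LICENSE
          value: "http://license-server:8080/license"   # assumed license-server Service URL
        - name: DOMAIN_CONFIG        # variable name is an assumption
          value: "/config/engineering-domain.xml"
        readinessProbe:
          httpGet:
            path: /                  # assumed health endpoint
            port: 9090
          initialDelaySeconds: 15
          periodSeconds: 10
        volumeMounts:
        - name: engineering-config-volume
          mountPath: /config
      volumes:
      - name: engineering-config-volume
        configMap:
          name: engineering-config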

To create the deployment, run the following command:

$ kubectl create -f ads-deployment.yaml -n engineering

To verify the deployment, run:

$ kubectl get deployment -n engineering

To verify if the authorization-engine pods are running, run:

$ kubectl get pods -n engineering

Now we have an authorization-engine microservice application in the cluster.

Creating a LoadBalancer

The authorization engine in the cluster is not yet accessible from an external network. To make it reachable by the PEPs, we will create a Service of type LoadBalancer, defined in the ads-service.yaml file.
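A sketch of what ads-service.yaml could contain is shown below; the Service name and the label selector are assumptions and must match the labels used in the Deployment:

apiVersion: v1
kind: Service
metadata:
  name: authz-service-engineering
spec:
  type: LoadBalancer
  selector:
    app: authz-service-engineering   # must match the Deployment's pod labels
  ports:
  - name: application
    port: 8990
    targetPort: 8990

To create the load balancer, run: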

$ kubectl create -f ads-service.yaml -n engineering

Here we create a Service that exposes the authz-service-engineering Deployment. GKE will then assign an external IP to the Service. Run the following command to monitor its status:

$ watch -n1 kubectl get service -n engineering

Wait for the external IP to be assigned; it may take a minute. Once it is set, we can send an authorization request file to the Service using curl or Postman.
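For reference, a minimal authz_request.json matching the policy above could look like the following. The short attribute identifiers (age, actionId, movie_genre) and their categories are assumptions and must match the attribute definitions in the engineering domain:

{
  "Request": {
    "AccessSubject": {
      "Attribute": [
        { "AttributeId": "age", "Value": 18, "DataType": "http://www.w3.org/2001/XMLSchema#integer" }
      ]
    },
    "Action": {
      "Attribute": [
        { "AttributeId": "actionId", "Value": "view" }
      ]
    },
    "Resource": {
      "Attribute": [
        { "AttributeId": "movie_genre", "Value": "horror" }
      ]
    }
  }
}

With curl, run: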

$ curl -i -u authz-user:mysecretpassword -X POST -H 'Content-Type: application/xacml+json' -d@authz_request.json http://<external-ip>:8990/authorize

Given the policy defined in engineering-domain.xml, we should expect a Permit response, since the age attribute in the request is set to 18.

Conclusion

So that’s it. We have deployed a cloud-native authorization engine on a Kubernetes cluster and learned some Kubernetes basics along the way. In the next part of this blog, we will run the authorization engine alongside other microservices, using some simple applications that call it.


