Earlier this year, I had to integrate an application with an identity provider. Both claimed to be compliant with OpenID Connect, yet they did not get along, and I had to find out where the exchange broke to determine which party was non-compliant. That meant digging into the transaction-level details, so I was eager to find a way to test OpenID Connect flows locally. Considering that I have discussed OIDC in several past articles and use cases (authenticating to the kube-api server, Kubernetes workloads, ROSA), being able to test OpenID Connect flows locally is important to me.
Architecture
What makes local testing difficult is that several parties are involved, each acting in a different role. Take the Authorization Code Flow as an example; the main actors are:
- User Agent: the browser;
- Authorization Server: an identity store, also referred to as the identity provider;
- Resource Server: an HTTP server that returns protected resources;
- OIDC client application: the component that communicates with the identity store on behalf of the resource server.
Apart from the browser (obviously running on my laptop), we can group the other actors in different patterns. I illustrate some options below:
In essence, we need an OIDC client app and an authorization server. I find the simplest test architecture to be pattern #2 (without a reverse proxy) on the client side and pattern #1 on the server side. In the next section, let's discuss why I prefer this test architecture.
Choice of Tools
The principle of this lab is to focus on the OIDC flows themselves and simplify every other aspect as much as we can.
The authorization server needs to reach my client app. If I used a public authorization server, such as Azure or Google, I would have to host my client app on a public IP with a domain name as well. That creates churn. To see how much hassle this can involve, review my article on the Istio External Authentication lab. Instead, I need a tool to host the authorization server on my laptop, and that tool is Keycloak. It is a well-known open-source project for identity and access management, and the upstream project of Red Hat SSO. Another reason is that it runs on a PostgreSQL database, a common relational database technology that is also released as Docker images.
On the client side, we could build a web service with built-in OIDC capability, which usually requires development work in a language with an OIDC library. Alternatively, if the client app lacks such capability, we can delegate it to a separate component on the client side: either a standalone, purpose-built proxy, or a generic reverse proxy with OIDC support. For this, I examined a few options. Nginx and Traefik offer OIDC support for a fee in their enterprise products. Apache has a free module, mod_auth_openidc, but it requires building the plug-in myself for the Mac platform. Eventually I landed on the purpose-built option: the oauth2-proxy open-source project. It can act as both actor #3 (with a minimal HTTP server) and actor #4 (the OIDC client), which saves me from hosting a separate web server. I have also used it in the past in the Istio External Authentication lab.
In reality, each message in the OIDC flows must be TLS encrypted; in local testing, though, we do not really care. Similarly, we do not necessarily need a reverse proxy if it plays no role in the OIDC flow. Both Keycloak (with PostgreSQL) and oauth2-proxy are released as Docker images. As a result, we are ready to roll with only three tools: the browser on the local host, plus Keycloak and oauth2-proxy in the Docker daemon, which makes the setup very portable.
To get started, let's use two fictitious domains: web.digihunch.com and keycloak.digihunch.com. In the Docker Compose manifest, I name the services after their hostnames so that they can reference each other from within the container network namespace. To access the services from the host, I force DNS resolution to localhost in /etc/hosts on my Mac, and declare the same host port in the port mapping (4180 for the dummy web service; 8080 for Keycloak).
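For reference, the /etc/hosts overrides look like this, with both fictitious names resolving to the loopback address:

```
127.0.0.1   web.digihunch.com
127.0.0.1   keycloak.digihunch.com
```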
Configuration
I created a Docker Compose file as below to set up my test:
services:
  web.digihunch.com:
    container_name: oauth2-proxy
    image: quay.io/oauth2-proxy/oauth2-proxy:latest
    command:
      - --http-address
      - 0.0.0.0:4180
    environment:
      OAUTH2_PROXY_COOKIE_SECRET: NYZaClZinINKwxNGzEDeFGh64W6tmq1eB6uHQPa4S5o
      OAUTH2_PROXY_CLIENT_ID: dh-user-client
      OAUTH2_PROXY_CLIENT_SECRET: pmkwBjkVesrj7fw1MY7h5s9e3cmAKXgc
      OAUTH2_PROXY_PROVIDER: oidc
      OAUTH2_PROXY_OIDC_ISSUER_URL: http://keycloak.digihunch.com:8080/realms/digihunch-users
      OAUTH2_PROXY_PASS_ACCESS_TOKEN: true
      OAUTH2_PROXY_EMAIL_DOMAINS: '*'
      OAUTH2_PROXY_REDIRECT_URL: http://web.digihunch.com:4180/oauth2/callback
      OAUTH2_PROXY_PROVIDER_DISPLAY_NAME: DHCKC
      OAUTH2_PROXY_COOKIE_CSRF_EXPIRE: '5m'
      OAUTH2_PROXY_COOKIE_CSRF_PER_REQUEST: true
      OAUTH2_PROXY_COOKIE_SECURE: false # Needed for HTTP connection
      #OAUTH2_PROXY_UPSTREAMS: file:///var/www/static/#/home/ # serve page at /home path
      OAUTH2_PROXY_UPSTREAMS: static://202
    volumes:
      - ./config/oauth2-proxy.cfg:/etc/oauth2-proxy.cfg
      # - ./config/www:/var/www/static/
    ports:
      - 4180:4180
    networks:
      - oidc_network
    restart: unless-stopped
    depends_on:
      - keycloak.digihunch.com
  postgres-db:
    image: postgres
    container_name: postgresdb
    restart: always
    shm_size: 128mb
    ports:
      - 5432:5432
    networks:
      - oidc_network
    volumes:
      - ./data/pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=master
      - POSTGRES_PASSWORD=masterpass
      - POSTGRES_DB=keycloak
  keycloak.digihunch.com:
    image: quay.io/keycloak/keycloak
    command: start
    environment: # Based on Hostname:v2 https://www.keycloak.org/docs/25.0.0/upgrading/#migrating-to-25-0-0
      KC_HOSTNAME: http://keycloak.digihunch.com:8080
      #KC_HOSTNAME_ADMIN: For simplicity, no separate management URL or port
      KC_HOSTNAME_BACKCHANNEL_DYNAMIC: true
      KC_HTTP_ENABLED: true ## Otherwise HTTPS is enforced by default.
      KC_HEALTH_ENABLED: true
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: kcadminpass
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres-db/keycloak
      KC_DB_USERNAME: master
      KC_DB_PASSWORD: masterpass
    ports:
      - 8080:8080
    networks:
      - oidc_network
    restart: always
    depends_on:
      - postgres-db
networks:
  oidc_network:
    enable_ipv6: false
    driver: bridge
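One note on the manifest above: OAUTH2_PROXY_COOKIE_SECRET must be a random value that base64url-decodes to 32 bytes (16 or 24 also work). A quick way to generate one on the command line (assuming openssl is available; the value in the manifest is only a sample for this lab):

```shell
# Generate 32 random bytes and encode them base64url without padding,
# the format oauth2-proxy expects for its cookie secret.
secret=$(openssl rand -base64 32 | tr -- '+/' '-_' | tr -d '=')
echo "$secret"
```

Paste the output into the compose file in place of the sample value.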
The official oauth2-proxy image is distroless. So if you have to troubleshoot its container file system, you need to access it via a helper container:
docker run --rm -it --name debugger --privileged --pid container:oauth2-proxy --network container:oauth2-proxy busybox sh
# to see the target container's file system: ls -l /proc/1/root/
Alternatively, we can use the image that Bitnami releases, but be wary of some nuances.
In the Keycloak part, I use environment variables and had to watch out for the recent changes to hostname v2. We should start the Keycloak service first. From http://keycloak.digihunch.com:8080, we can log in using the credentials specified in the environment variables, then create a client app:
- Create a new realm (dropdown -> Create realm) with the name digihunch-users
- Switch to this realm from the dropdown, create a couple of users (e.g. [email protected]), and set their passwords.
- Create a group (e.g. myadmin) and add the user to the group
- Under the same realm, create a client of type OpenID Connect with client ID dh-user-client. Turn on client authentication (without Direct access grants)
- Save the client for now and grab the client secret. Note that the OIDC discovery document (http://keycloak.digihunch.com:8080/realms/digihunch-users/.well-known/openid-configuration) should come online.
- Update the Docker Compose file:
- The value for OAUTH2_PROXY_CLIENT_SECRET is from the client secret;
- The value for OAUTH2_PROXY_OIDC_ISSUER_URL should be http://keycloak.digihunch.com:8080/realms/digihunch-users;
- The value for OAUTH2_PROXY_CLIENT_ID is dh-user-client;
- Restart all services, including the web service. Log on to Keycloak and go back to the client configuration in the realm. Under Settings, put http://web.digihunch.com:4180/oauth2/callback in Valid redirect URIs and save the client.
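A common failure mode when wiring these two services together is an issuer mismatch: oauth2-proxy fetches the discovery document and rejects the provider unless its issuer field equals OAUTH2_PROXY_OIDC_ISSUER_URL exactly (scheme, host, port, and path). A sketch of the check, using an illustrative excerpt of the discovery document rather than a live fetch:

```shell
# Illustrative excerpt of the discovery document served under
# /realms/digihunch-users/.well-known/openid-configuration (not a live fetch).
doc='{"issuer":"http://keycloak.digihunch.com:8080/realms/digihunch-users"}'
# Extract the issuer field and compare it with the value given to oauth2-proxy.
issuer=$(printf '%s' "$doc" | sed 's/.*"issuer":"\([^"]*\)".*/\1/')
configured='http://keycloak.digihunch.com:8080/realms/digihunch-users'
[ "$issuer" = "$configured" ] && echo "issuer matches" || echo "issuer mismatch"
```

In a real session, the same comparison can be made against the live discovery URL once Keycloak is up.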
Now, let's start a private browser session and browse to http://web.digihunch.com:4180/. The browser should redirect you to Keycloak's login page. Once login succeeds, it should redirect you back to the static response with the 202 code, as configured in the manifest.
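Under the hood, the redirect you observe is the authorization request of the code flow. With the values configured in this lab, the URL that oauth2-proxy sends the browser to looks roughly like the following sketch (RANDOM_STATE stands in for the per-request CSRF state value, and the exact scope list may differ):

```shell
# Assemble an illustrative authorization request URL for the code flow.
# Keycloak's authorization endpoint lives under protocol/openid-connect/auth.
auth_endpoint='http://keycloak.digihunch.com:8080/realms/digihunch-users/protocol/openid-connect/auth'
redirect_uri='http://web.digihunch.com:4180/oauth2/callback'
query="client_id=dh-user-client&response_type=code&scope=openid+email+profile&redirect_uri=${redirect_uri}&state=RANDOM_STATE"
echo "${auth_endpoint}?${query}"
```

Watching this URL in the browser's developer tools is exactly the kind of transaction-level detail that motivated this lab.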
Summary
In this post, I laid out the steps to test the login for the OIDC authorization code flow locally as a starting point. For a bullet-proof solution, I recommend reviewing the Keycloak administration guide. For example, we typically disable the master realm for security.
There is a similar test setup on the Okta blog, but I simplified all the aspects that I regard as distractions. Many other flows can be tested as well. However, some of that testing still requires client pattern #1 if we need to initiate an activity from the client application.