
Prep for Azure Active Directory Integration

In the prior step, you generated the user-facing TLS certificate; now we'll prepare Azure AD for Kubernetes role-based access control (RBAC). This ensures you have the Azure AD security groups and users assigned for group-based Kubernetes control plane access.

Expected results

Following the steps below will result in an Azure AD configuration that will be used for Kubernetes control plane (Cluster API) authorization.

| Object | Purpose |
|---|---|
| A Cluster Admin Security Group | Will be mapped to the cluster-admin Kubernetes role. |
| A Cluster Admin User | Represents at least one break-glass cluster admin user. |
| Cluster Admin Group Membership | Association between the Cluster Admin User(s) and the Cluster Admin Security Group. |
| A Namespace Reader Security Group | Represents users that will have read-only access to a specific namespace in the cluster. |
| Additional Security Groups | Optional. A security group (and its memberships) for the other built-in and custom Kubernetes roles you plan on using. |

Steps

📖 The Contoso Bicycle Azure AD team requires all admin access to AKS clusters be security-group based. This applies to the new AKS cluster that is being built for Application ID a0008 under the BU0001 business unit. Kubernetes RBAC will be AAD-backed and access granted based on users' AAD group membership(s).

  1. Query and save your Azure subscription's tenant id.

    export TENANTID_AKS_BASELINE=$(az account show --query tenantId -o tsv)
  2. Create or identify the Azure AD security group and "break-glass" cluster administrator user that will map to the Kubernetes Cluster Admin role cluster-admin. (An optional verification sketch follows these steps.)

    Option 1 - Create a new security group, user, and associate the two

    1. Create the cluster administrator security group.

      export AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE=$(az ad group create --display-name 'cluster-admins-bu0001a000800' --mail-nickname 'cluster-admins-bu0001a000800' --description "Principals in this group are cluster admins in the bu0001a000800 cluster." --query objectId -o tsv)
    2. Create the "break-glass" cluster administrator user.

      📖 The organization knows the value of having a break-glass admin user for their critical infrastructure. The app team requests a cluster admin user, and the Azure AD Admin team proceeds with the creation of the user in Azure AD.

      TENANTDOMAIN_K8SRBAC=$(az ad signed-in-user show --query 'userPrincipalName' -o tsv | cut -d '@' -f 2 | sed 's/\"//')
      AADOBJECTNAME_USER_CLUSTERADMIN=bu0001a000800-admin
      AADOBJECTID_USER_CLUSTERADMIN=$(az ad user create --display-name=${AADOBJECTNAME_USER_CLUSTERADMIN} --user-principal-name ${AADOBJECTNAME_USER_CLUSTERADMIN}@${TENANTDOMAIN_K8SRBAC} --force-change-password-next-login --password ChangeMebu0001a0008AdminChangeMe --query objectId -o tsv)
    3. Add the cluster administrator user to the cluster administrator security group.

      📖 The recently created break-glass admin user is added to the Kubernetes Cluster Admin group from Azure AD. After this step the Azure AD Admin team will have finished the app team's request.

      az ad group member add -g $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE --member-id $AADOBJECTID_USER_CLUSTERADMIN
    4. Create the Azure AD security group that is going to be a namespace reader.

      export AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE=$(az ad group create --display-name 'cluster-ns-a0008-readers-bu0001a000800' --mail-nickname 'cluster-ns-a0008-readers-bu0001a000800' --description "Principals in this group are readers of namespace a0008 in the bu0001a000800 cluster." --query objectId -o tsv)
      echo AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE: $AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE

    Option 2 - Use an existing security group and user

    1. If you already have a security group and cluster administrator user that are appropriate for your cluster's admin user accounts, use those. We will save the security group object ID to use later when we deploy our cluster.

      export AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE=$(az ad group show --group 'cluster-admins-bu0001a000800' --query objectId -o tsv)
      echo AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE: $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE
    2. Identify the Azure AD security group that is going to be a namespace reader.

      export AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE=$(az ad group show --group 'cluster-ns-a0008-readers-bu0001a000800' --query objectId -o tsv)
      echo AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE: $AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE
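
    Optional verification: before moving on, you can confirm the values you just captured and, if you followed Option 1, the group membership. This is a minimal sketch using standard Azure CLI commands; the exact output shape can vary with your Azure CLI version.

      # echo the identifiers you'll need in later steps
      echo TENANTID_AKS_BASELINE: $TENANTID_AKS_BASELINE
      echo AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE: $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE
      echo AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE: $AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE

      # list the members of the cluster admin group (with Option 1, this should include the break-glass admin)
      az ad group member list -g $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE --query '[].displayName' -o tsv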

Kubernetes RBAC backing store

AKS supports backing Kubernetes RBAC with Azure AD in two different modalities. The first is a direct association between Azure AD and Kubernetes ClusterRoleBindings/RoleBindings in the cluster. This works regardless of whether the Azure AD tenant you use to back your Kubernetes RBAC is the same as, or different from, the tenant backing your Azure resources. If, however, the tenant backing your Azure resources (your Azure RBAC source) is the same tenant you plan to use for Kubernetes RBAC, you can instead add a layer of indirection between Azure AD and your cluster by using Azure RBAC rather than direct cluster RoleBinding manipulation.

When performing this walkthrough, you may have had no choice but to associate the cluster with another tenant (due to the elevated permissions necessary in Azure AD to manage groups and users); but when you take this to production, be sure you're using Azure RBAC as your Kubernetes RBAC backing store if the tenants are the same. Both options leverage integrated authentication between Azure AD and AKS; Azure RBAC simply elevates this control to Azure RBAC instead of direct YAML-based management within the cluster, which usually aligns better with your organization's governance strategy.
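
For context, the AKS cluster deployed later in this walkthrough gets its Azure AD integration settings from its deployment template. Purely as an illustration, and assuming a single tenant backs both your Azure resources and your Kubernetes RBAC, the equivalent Azure CLI flags would look roughly like the sketch below; the resource group and cluster names are placeholders, not values used elsewhere in this walkthrough.

# Illustrative only -- the reference implementation deploys the cluster from templates, not this command.
# rg-example and aks-example are placeholder names.
az aks create \
  --resource-group rg-example \
  --name aks-example \
  --enable-aad \
  --aad-tenant-id $TENANTID_AKS_BASELINE \
  --aad-admin-group-object-ids $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE \
  --enable-azure-rbac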

Azure RBAC [Preferred]

If you are using a single tenant for this walkthrough, the cluster deployment step later will take care of the necessary role assignments for the groups created above. Specifically, in the steps above, you created the Azure AD security group cluster-ns-a0008-readers-bu0001a000800, which is going to be a reader of namespace a0008, and the Azure AD security group cluster-admins-bu0001a000800, which is going to contain cluster admins. Those group object IDs will be associated with the 'Azure Kubernetes Service RBAC Reader' and 'Azure Kubernetes Service RBAC Cluster Admin' roles, respectively, scoped to their proper level within the cluster.

Using Azure RBAC as your authorization approach is ultimately preferred, as it allows for unified management and access control across Azure resources, AKS, and Kubernetes resources. At the time of this writing, there are four Azure RBAC roles that represent typical cluster access patterns.
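
As an illustration of what that deployment step does on your behalf, equivalent manual assignments with the Azure CLI would look something like the sketch below. AKS_CLUSTER_ID is a hypothetical placeholder for the cluster's full Azure resource ID; you do not need to run these commands in this walkthrough.

# Illustrative only -- these role assignments are created for you during cluster deployment.
# AKS_CLUSTER_ID is a hypothetical variable holding the cluster's Azure resource ID.
az role assignment create --role "Azure Kubernetes Service RBAC Cluster Admin" --assignee $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE --scope $AKS_CLUSTER_ID
az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee $AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE --scope $AKS_CLUSTER_ID/namespaces/a0008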

Direct Kubernetes RBAC management [Alternative]

If you wish not to use Azure RBAC as your Kubernetes RBAC authorization mechanism, whether due to the intentional use of disparate Azure AD tenants or other business justifications, you can instead manage these RBAC assignments via direct ClusterRoleBinding/RoleBinding associations. This method is also useful when the four Azure RBAC roles are not granular enough for your desired permission model.

  1. Set up additional Kubernetes RBAC associations. Optional, fork required.

    📖 The team knows there will be more than just cluster admins that need group-managed access to the cluster. Out of the box, Kubernetes has other roles like admin, edit, and view which can also be mapped to Azure AD Groups for use both at namespace and at the cluster level. Likewise custom roles can be created which need to be mapped to Azure AD Groups.

    In the cluster-rbac.yaml file and the various namespaced rbac.yaml files, you can uncomment what you wish and replace the <replace-with-an-aad-group-object-id...> placeholders with corresponding new or existing Azure AD groups that map to their purpose for this cluster or namespace. You do not need to perform this action for this walkthrough; they are only here for your reference.
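
    Those files manage the bindings declaratively. Purely as an illustration of the underlying mechanism, and assuming a cluster you already have access to, equivalent imperative bindings for the two groups created above could look like the following sketch; the binding names here are arbitrary, and you do not need to run these commands.

      # Illustrative only -- this walkthrough manages its bindings via the yaml files mentioned above.
      kubectl create clusterrolebinding cluster-admins-bu0001a000800 --clusterrole=cluster-admin --group=$AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE
      kubectl create rolebinding a0008-readers --namespace=a0008 --clusterrole=view --group=$AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE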

Save your work in-progress

# run the saveenv.sh script at any time to save environment variables created above to aks_baseline.env
./saveenv.sh

# if your terminal session gets reset, you can source the file to reload the environment variables
# source aks_baseline.env

Next step

▶️ Deploy the hub-spoke network topology