readme: update aks setup description (#250)
Co-authored-by: Paul Meyer <[email protected]>
Co-authored-by: 3u13r <[email protected]>
3 people authored Mar 19, 2024
1 parent 5746766 commit 89184c1
Showing 2 changed files with 121 additions and 28 deletions.
53 changes: 42 additions & 11 deletions README.md
@@ -120,34 +120,48 @@ confidential and deploying it together with Contrast.

### Prerequisite

A CoCo-enabled cluster is required to run Contrast. Create it using the [`az`](https://docs.microsoft.com/en-us/cli/azure/) CLI:

```sh
# Ensure you set this to an existing resource group in your subscription
azResourceGroup="ContrastDemo"
# Select the name for your AKS cluster
azClusterName="ContrastDemo"

az extension add \
  --name aks-preview \
  --allow-preview true

az extension update \
  --name aks-preview \
  --allow-preview true

az feature register --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview"
az feature show --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview"
az provider register -n Microsoft.ContainerService

az aks create \
  --resource-group "$azResourceGroup" \
  --name "$azClusterName" \
  --kubernetes-version 1.29 \
  --os-sku AzureLinux \
  --node-vm-size Standard_DC4as_cc_v5 \
  --node-count 1 \
  --generate-ssh-keys

az aks nodepool add \
  --resource-group "$azResourceGroup" \
  --name nodepool2 \
  --cluster-name "$azClusterName" \
  --mode System \
  --node-count 1 \
  --os-sku AzureLinux \
  --node-vm-size Standard_DC4as_cc_v5 \
  --workload-runtime KataCcIsolation

az aks get-credentials \
  --resource-group "$azResourceGroup" \
  --name "$azClusterName"
```
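
Registering the `KataCcIsolationPreview` feature can take a few minutes before `az aks create` succeeds. A minimal polling sketch, assuming the feature and namespace names used above:

```sh
# Wait until the preview feature reports "Registered" before creating the cluster.
while [ "$(az feature show --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview" --query properties.state -o tsv)" != "Registered" ]; do
  sleep 10
done
# Propagate the registration to the resource provider.
az provider register -n Microsoft.ContainerService
```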

Check [Azure's deployment guide](https://learn.microsoft.com/en-us/azure/aks/deploy-confidential-containers-default-policy) for more detailed instructions.
@@ -260,12 +274,29 @@ in the manifest are also written to the directory.

### Communicate with Workloads

You can securely connect to the workloads using the Coordinator's `mesh-root.pem` as a trusted CA certificate.
First, expose the service on a public IP address via a LoadBalancer service:

```sh
# Expose the service via a public load balancer.
kubectl patch svc ${MY_SERVICE} -p '{"spec": {"type": "LoadBalancer"}}'
# Wait up to 30s for the load balancer to receive an external IP.
timeout 30s bash -c "until kubectl get service/${MY_SERVICE} --output=jsonpath='{.status.loadBalancer}' | grep ingress; do sleep 2; done"
lbip=$(kubectl get svc ${MY_SERVICE} -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $lbip
```

Note: All workload certificates are issued with a wildcard DNS entry as their subject alternative name (SAN). Because the load balancer is accessed via its IP address, the TLS client looks for a matching IP entry in the certificate's SAN field. The certificate doesn't contain any IP SANs, so validation fails.
Hence, with `curl` you need to skip certificate validation:

```sh
curl -k "https://${lbip}:443"
```
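
If you prefer to keep certificate validation enabled, one option is to map a DNS name onto the load balancer IP with curl's `--resolve` flag so the wildcard SAN can match. Whether this works depends on the exact SAN in the mesh certificate, so treat the hostname below as an assumption:

```sh
# Pin a hostname (assumed to match the certificate's wildcard SAN) to the load balancer IP.
curl --cacert ./verify/mesh-root.pem \
  --resolve "contrast-demo.example.com:443:${lbip}" \
  "https://contrast-demo.example.com:443"
```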

To validate the certificate with `mesh-root.pem` locally, use `openssl` instead:

```sh
# Fetch the server's certificate chain and strip everything but the PEM blocks.
openssl s_client -showcerts -connect ${lbip}:443 </dev/null | sed -n -e '/-.BEGIN/,/-.END/ p' > certChain.pem
# Split the chain into one file per certificate (cert.1.pem, cert.2.pem, ...).
awk 'BEGIN {c=0;} /BEGIN CERT/{c++} { print > "cert." c ".pem"}' < certChain.pem
# Verify the first certificate in the chain against the Coordinator's mesh root.
openssl verify -verbose -trusted verify/mesh-root.pem -- cert.1.pem
```

## Current limitations
96 changes: 79 additions & 17 deletions dev-docs/user-manual.md
@@ -8,7 +8,51 @@ Kubernetes pods that are executed inside a confidential micro-VM and provide strong isolation
from the surrounding environment. This works with unmodified containers in a lift-and-shift approach.
It currently targets the [CoCo preview on AKS](https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-containers-on-aks-preview).

## Goal

Contrast is designed to keep all data always encrypted and to prevent access from the infrastructure layer, i.e., to remove the infrastructure from the TCB. This includes access by datacenter employees, privileged cloud admins, your own cluster administrators, and attackers coming through the infrastructure, e.g., malicious co-tenants escalating their privileges.

Contrast integrates seamlessly with existing Kubernetes workflows. It's compatible with managed Kubernetes, can be installed as a day-2 operation, and imposes only minimal changes to your deployment flow.

## Use Cases

* Increasing the security of your containers
* Moving sensitive workloads from on-prem to the cloud with Confidential Computing
* Shielding code and data even from your own cluster administrators
* Increasing the trustworthiness of your SaaS offerings
* Simplifying regulatory compliance
* Multi-party computation for data collaboration

## Features

### 🔒 Everything always encrypted

* Runtime encryption: All Pods run inside AMD SEV-based Confidential VMs (CVMs). Support for Intel TDX will be added in the future.
* PKI and mTLS: All pod-to-pod traffic can be encrypted and authenticated with Contrast's workload certificates.

### 🔍 Everything verifiable

* Workload attestation based on the identity of your container and the remote-attestation feature of [Confidential Containers](https://github.com/confidential-containers)
* "Whole deployment" attestation based on Contrast's [Coordinator attestation service](#the-contrast-coordinator)
* Runtime environment integrity verification based on runtime policies
* Kata micro-VMs and single workload isolation provide a minimal Trusted Computing Base (TCB)

### 🏝️ Everything isolated

* Runtime policies enforce strict isolation of your containers from the Kubernetes layer and the infrastructure.
* Pod isolation: Pods are isolated from each other.
* Namespace isolation: Contrast can be deployed independently in multiple namespaces.

### 🧩 Lightweight and easy to use

* Install in your Kubernetes cluster as a day-2 operation.
* Compatible with managed Kubernetes.
* Minimal DevOps involvement.
* Simple CLI tool to get started.

## Components

### The Contrast Coordinator

The Contrast Coordinator is the central remote attestation service of a Contrast deployment.
It runs inside a confidential container inside your cluster.
@@ -22,7 +66,7 @@ As your app needs to scale, the Coordinator transparently verifies new instances
To verify your deployment, the Coordinator's remote attestation statement combined with the manifest offers a concise single remote attestation statement for your entire deployment.
A third party can use this to verify the integrity of your distributed app, making it easy to assure stakeholders of your app's identity and integrity.
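
A minimal sketch of how a verifying party might do this with the Contrast CLI; the `verify` subcommand, the `-c` coordinator flag, the service name `coordinator`, and port 1313 are assumptions here and may differ per release:

```sh
# Reach the Coordinator from outside the cluster (service name and port are assumptions).
kubectl port-forward service/coordinator 1313:1313 &
# Fetch and check the Coordinator's attestation statement and the active manifest.
contrast verify -c localhost:1313
```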

### The Manifest

The manifest is the configuration file for the Coordinator, defining your confidential deployment.
It is automatically generated from your deployment by the Contrast CLI.
@@ -32,7 +76,7 @@ It currently consists of the following parts:
* *Reference Values*: The remote attestation reference values for the Kata confidential micro-VM that is the runtime environment of your Pods.
* *WorkloadOwnerKeyDigest*: The workload owner's public key digest. Used for authenticating subsequent manifest updates.
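
For orientation, a sketch of how the manifest is typically produced and activated with the CLI; the `generate` and `set` subcommands, the `deployment/` directory, and the coordinator address are assumptions based on the workflow described here:

```sh
# Generate runtime policies for the deployment files and write the manifest (e.g. manifest.json).
contrast generate deployment/
# Upload the manifest to the Coordinator (address and port are assumptions).
contrast set -c "${coordinator}:1313" deployment/
```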

### Runtime Policies

Runtime Policies are a mechanism to enable the use of the (untrusted) Kubernetes API for orchestration while ensuring the confidentiality and integrity of your confidential containers.
They allow us to enforce the integrity of your containers' runtime environment as defined in your deployment files.
@@ -53,7 +97,7 @@ The trust chain goes as follows:

After the last step, we know that the policy has not been tampered with and, thus, that the workload is as intended.
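
To make the policy binding concrete, here is a hedged sketch of how one could inspect the policy attached to a pod and compute its digest; the annotation key comes from Kata Containers, and treating the SHA-256 of the decoded policy as the value pinned in the manifest is an assumption:

```sh
# Read the base64-encoded agent policy injected into the pod spec (Kata annotation key).
kubectl get pod "${MY_POD}" \
  -o jsonpath='{.metadata.annotations.io\.katacontainers\.config\.agent\.policy}' \
  | base64 -d > policy.rego
# Hash it; this is assumed to be the value a manifest entry pins.
sha256sum policy.rego
```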

### The Contrast Initializer

Contrast provides an Initializer that handles the remote attestation on the workload side transparently and
fetches the workload certificate. The Initializer runs as an init container before your workload is started.
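
As an illustration, after the Initializer has run you could check for the credentials it fetched; the mount path `/tls-config` is an assumption and may differ in your deployment:

```sh
# List the init containers injected into the workload pod.
kubectl get pod "${MY_POD}" -o jsonpath='{.spec.initContainers[*].name}'; echo
# Look at the certificates the Initializer wrote to the shared volume (path is an assumption).
kubectl exec "${MY_POD}" -- ls /tls-config
```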
@@ -73,31 +117,45 @@ az login
Create an AKS cluster with Confidential Container support:

```sh
# Ensure you set this to an existing resource group in your subscription
azResourceGroup="ContrastDemo"
# Select the name for your AKS cluster
azClusterName="ContrastDemo"

az extension add \
  --name aks-preview \
  --allow-preview true

az extension update \
  --name aks-preview \
  --allow-preview true

az feature register --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview"
az feature show --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview"
az provider register -n Microsoft.ContainerService

az aks create \
  --resource-group "$azResourceGroup" \
  --name "$azClusterName" \
  --kubernetes-version 1.29 \
  --os-sku AzureLinux \
  --node-vm-size Standard_DC4as_cc_v5 \
  --node-count 1 \
  --generate-ssh-keys

az aks nodepool add \
  --resource-group "$azResourceGroup" \
  --name nodepool2 \
  --cluster-name "$azClusterName" \
  --mode System \
  --node-count 1 \
  --os-sku AzureLinux \
  --node-vm-size Standard_DC4as_cc_v5 \
  --workload-runtime KataCcIsolation

az aks get-credentials \
  --resource-group "$azResourceGroup" \
  --name "$azClusterName"
```
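
Once `az aks get-credentials` has populated your kubeconfig, a quick sanity-check sketch; the runtime class name `kata-cc-isolation` is what the AKS preview documents, so treat it as an assumption if your preview version differs:

```sh
# The confidential runtime class should exist on the freshly created cluster.
kubectl get runtimeclass kata-cc-isolation
# The system node pool should be Ready.
kubectl get nodes
```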

### Download the latest Contrast release
@@ -164,20 +222,24 @@ also written into the same directory.

### Connect and verify the workload

You can securely connect to the workloads using the Coordinator's `mesh-root.pem` as a trusted CA certificate.
First, expose the service on a public IP address via a LoadBalancer service:

```sh
kubectl patch svc web-svc -p '{"spec": {"type": "LoadBalancer"}}'
timeout 30s bash -c 'until kubectl get service/web-svc --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do sleep 2 ; done'
lbip=$(kubectl get svc web-svc -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $lbip
curl --cacert ./verify/mesh-root.pem -k "https://${lbip}"
```

Note: All workload certificates are issued with a wildcard DNS entry as their subject alternative name (SAN). Because the load balancer is accessed via its IP address, the TLS client looks for a matching IP entry in the certificate's SAN field. The certificate doesn't contain any IP SANs, so validation fails.
Hence, with `curl` you need to skip certificate validation:

```sh
curl -k "https://${lbip}:443"
```

To validate the certificate with `mesh-root.pem` locally, use `openssl` instead:

```sh
# Fetch the server's certificate chain and strip everything but the PEM blocks.
openssl s_client -showcerts -connect ${lbip}:443 </dev/null | sed -n -e '/-.BEGIN/,/-.END/ p' > certChain.pem
# Split the chain into one file per certificate (cert.1.pem, cert.2.pem, ...).
awk 'BEGIN {c=0;} /BEGIN CERT/{c++} { print > "cert." c ".pem"}' < certChain.pem
# Verify the first certificate in the chain against the Coordinator's mesh root.
openssl verify -verbose -trusted verify/mesh-root.pem -- cert.1.pem
```
