Add updated examples for GKE and EKS #1115

Merged Jan 21, 2021 (30 commits)

Commits
8eb1b93
Add updated examples for GKE and EKS
dak1n1 Jan 11, 2021
9fe34ed
add helm provider to examples
dak1n1 Jan 12, 2021
38c421c
Add README for GKE
dak1n1 Jan 12, 2021
182bbe6
include instructions on generating kubeconfig
dak1n1 Jan 12, 2021
a8fabc6
working on EKS README
dak1n1 Jan 12, 2021
64f8ae0
update eks readme
dak1n1 Jan 12, 2021
fe4d748
update gke README
dak1n1 Jan 12, 2021
365c327
clarify readme and remove unneeded vars from eks
dak1n1 Jan 13, 2021
1462738
added link to EKS docs
dak1n1 Jan 13, 2021
d689f89
start adding AKS
dak1n1 Jan 13, 2021
ae739c6
validation passes
dak1n1 Jan 15, 2021
ac5be02
apply works
dak1n1 Jan 15, 2021
c038a7b
works in a single apply
dak1n1 Jan 16, 2021
946df5f
fix kubeconfig path
dak1n1 Jan 16, 2021
775a05a
figured out how to replace an AKS cluster
dak1n1 Jan 17, 2021
c3cfe97
update readme
dak1n1 Jan 17, 2021
9dac85a
update readmes
dak1n1 Jan 17, 2021
076030f
update readme
dak1n1 Jan 17, 2021
69751d6
update readme and add helm provider
dak1n1 Jan 18, 2021
b8997b7
update readme to remove jq
dak1n1 Jan 19, 2021
0cf0cf3
remove version until 2.0 is released
dak1n1 Jan 19, 2021
5276ddb
added more details to AKS readme
dak1n1 Jan 19, 2021
0e027b1
remove unneeded disk
dak1n1 Jan 20, 2021
1b017b3
minor fixes
dak1n1 Jan 20, 2021
40008c9
update or remove old examples
dak1n1 Jan 20, 2021
da99ae3
automate fixing formatting in examples
dak1n1 Jan 21, 2021
403e63b
add some version constraints
dak1n1 Jan 21, 2021
ee738a5
undo changes to test-infra
dak1n1 Jan 21, 2021
a2c0599
Add a sentence about replacing the clusters
dak1n1 Jan 21, 2021
c63ad91
fix error
dak1n1 Jan 21, 2021
Files changed
12 changes: 12 additions & 0 deletions GNUmakefile
@@ -33,6 +33,18 @@ depscheck:
@git diff --exit-code -- vendor || \
(echo; echo "Unexpected difference in vendor/ directory. Run 'go mod vendor' command or revert any go.mod/go.sum/vendor changes and commit."; exit 1)

examples-lint: tools
@echo "==> Checking _examples dir formatting..."
@./scripts/fmt-examples.sh || (echo; \
echo "Terraform formatting errors found in _examples dir."; \
echo "To see the full differences, run: ./scripts/fmt-examples.sh diff"; \
echo "To automatically fix the formatting, run 'make examples-lint-fix' and commit the changes."; \
exit 1)

examples-lint-fix: tools
@echo "==> Fixing terraform formatting of _examples dir..."
@./scripts/fmt-examples.sh fix

fmt:
gofmt -w $(GOFMT_FILES)

60 changes: 60 additions & 0 deletions _examples/aks/README.md
@@ -0,0 +1,60 @@
# AKS (Azure Kubernetes Service)

This example shows how to use the Terraform Kubernetes Provider and Terraform Helm Provider to configure an AKS cluster. The example config in this directory builds the AKS cluster and applies the Kubernetes configurations in a single operation. This guide also shows how to make changes to the underlying AKS cluster in such a way that the Kubernetes/Helm resources are recreated after the underlying cluster is replaced.

You will need the following environment variables to be set:

- `ARM_SUBSCRIPTION_ID`
- `ARM_TENANT_ID`
- `ARM_CLIENT_ID`
- `ARM_CLIENT_SECRET`
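
For example, these can be exported in the shell before running Terraform; the values below are placeholders for an Azure service principal, not working credentials:

```
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<service-principal-secret>"
```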

Ensure that the `KUBE_CONFIG_FILE` and `KUBE_CONFIG_FILES` environment variables are NOT set, as they will interfere with the cluster build.

```
unset KUBE_CONFIG_FILE
unset KUBE_CONFIG_FILES
```

To install the AKS cluster using default values, run `terraform init` and `terraform apply` from the directory containing this README.

```
terraform init
terraform apply
```

## Kubeconfig for manual CLI access

This example generates a kubeconfig file in the current working directory, which can be used for manual CLI access to the cluster.

```
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl get pods -n test
```
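
Optionally, as a quick sanity check (not part of the example config, and assuming `openssl` is installed locally), you can inspect when the client certificate embedded in the generated kubeconfig expires:

```
# print the "notAfter" date of the embedded client certificate
# (on macOS, use "base64 -D" instead of "base64 -d")
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -enddate
```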

However, in a real-world scenario, this config file would have to be replaced periodically as the AKS client certificates eventually expire (see the [Azure documentation](https://docs.microsoft.com/en-us/azure/aks/certificate-rotation) for the exact expiry dates). If the certificates (or other authentication attributes) are replaced, run a targeted `terraform apply` to save the new credentials into state.

```
terraform plan -target=module.aks-cluster
terraform apply -target=module.aks-cluster
```

Once the targeted apply is finished, the Kubernetes and Helm providers will be available for use again. Run `terraform apply` again (without targeting) to apply any updates to Kubernetes resources.

```
terraform plan
terraform apply
```

This approach prevents the Kubernetes and Helm providers from attempting to use cached, invalid credentials, which would cause provider configuration errors during the plan and apply phases.

## Replacing the AKS cluster and re-creating the Kubernetes / Helm resources

When the cluster is initially created, the Kubernetes and Helm providers will not be initialized until authentication details are created for the cluster. However, for future operations that may involve replacing the underlying cluster (for example, changing VM sizes), the AKS cluster will have to be targeted without the Kubernetes/Helm providers, as shown below. This is done by removing `module.kubernetes-config` from the Terraform state prior to replacing cluster credentials, to avoid passing outdated credentials into the providers.

This will create the new cluster and the Kubernetes resources in a single apply.

```
terraform state rm module.kubernetes-config
terraform apply
```
27 changes: 27 additions & 0 deletions _examples/aks/aks-cluster/main.tf
@@ -0,0 +1,27 @@
resource "azurerm_resource_group" "test" {
name = var.cluster_name
location = var.location
}

resource "azurerm_kubernetes_cluster" "test" {
name = var.cluster_name
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
dns_prefix = var.cluster_name

default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_DS2_v2"
}

identity {
type = "SystemAssigned"
}
}

resource "local_file" "kubeconfig" {
content = azurerm_kubernetes_cluster.test.kube_config_raw
filename = "${path.root}/kubeconfig"
}

15 changes: 15 additions & 0 deletions _examples/aks/aks-cluster/output.tf
@@ -0,0 +1,15 @@
output "client_cert" {
value = azurerm_kubernetes_cluster.test.kube_config.0.client_certificate
}

output "client_key" {
value = azurerm_kubernetes_cluster.test.kube_config.0.client_key
}

output "ca_cert" {
value = azurerm_kubernetes_cluster.test.kube_config.0.cluster_ca_certificate
}

output "endpoint" {
value = azurerm_kubernetes_cluster.test.kube_config.0.host
}
15 changes: 15 additions & 0 deletions _examples/aks/aks-cluster/variables.tf
@@ -0,0 +1,15 @@
variable "kubernetes_version" {
default = "1.18"
}

variable "workers_count" {
default = "3"
}

variable "cluster_name" {
type = string
}

variable "location" {
type = string
}
56 changes: 56 additions & 0 deletions _examples/aks/kubernetes-config/main.tf
@@ -0,0 +1,56 @@
resource "kubernetes_namespace" "test" {
[Review comment from a collaborator] Need to run terraform fmt on this file I think.

metadata {
name = "test"
}
}

resource "kubernetes_deployment" "test" {
metadata {
name = "test"
namespace = kubernetes_namespace.test.metadata.0.name
}
spec {
replicas = 2
selector {
match_labels = {
app = "test"
}
}
template {
metadata {
labels = {
app = "test"
}
}
spec {
container {
image = "nginx:1.19.4"
name = "nginx"

resources {
limits = {
memory = "512M"
cpu = "1"
}
requests = {
memory = "256M"
cpu = "50m"
}
}
}
}
}
}
}

resource "helm_release" "nginx_ingress" {
name = "nginx-ingress-controller"

repository = "https://charts.bitnami.com/bitnami"
chart = "nginx-ingress-controller"

set {
name = "service.type"
value = "ClusterIP"
}
}
3 changes: 3 additions & 0 deletions _examples/aks/kubernetes-config/variables.tf
@@ -0,0 +1,3 @@
variable "cluster_name" {
type = string
}
50 changes: 50 additions & 0 deletions _examples/aks/main.tf
@@ -0,0 +1,50 @@
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.0.0"
}
azurerm = {
source = "hashicorp/azurerm"
version = "2.42"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.0.1"
}
}
}

provider "kubernetes" {
host = module.aks-cluster.endpoint
client_key = base64decode(module.aks-cluster.client_key)
client_certificate = base64decode(module.aks-cluster.client_cert)
cluster_ca_certificate = base64decode(module.aks-cluster.ca_cert)
}

provider "helm" {
kubernetes {
host = module.aks-cluster.endpoint
client_key = base64decode(module.aks-cluster.client_key)
client_certificate = base64decode(module.aks-cluster.client_cert)
cluster_ca_certificate = base64decode(module.aks-cluster.ca_cert)
}
}

provider "azurerm" {
features {}
}

module "aks-cluster" {
providers = { azurerm = azurerm }
source = "./aks-cluster"
cluster_name = local.cluster_name
location = var.location
}

module "kubernetes-config" {
providers = { kubernetes = kubernetes, helm = helm }
depends_on = [module.aks-cluster]
source = "./kubernetes-config"
cluster_name = local.cluster_name
}
7 changes: 7 additions & 0 deletions _examples/aks/outputs.tf
@@ -0,0 +1,7 @@
output "kubeconfig_path" {
value = abspath("${path.root}/kubeconfig")
}

output "cluster_name" {
value = local.cluster_name
}
12 changes: 12 additions & 0 deletions _examples/aks/variables.tf
@@ -0,0 +1,12 @@
variable "location" {
[Review comment from a collaborator] also need terraform fmt on this file

type = string
default = "westus2"
}

resource "random_id" "cluster_name" {
byte_length = 5
}

locals {
cluster_name = "tf-k8s-${random_id.cluster_name.hex}"
}
10 changes: 5 additions & 5 deletions _examples/certificate-signing-request/main.tf
@@ -1,6 +1,6 @@
resource "tls_private_key" "example" {
  algorithm = "ECDSA"
-  rsa_bits = "4096"
+  rsa_bits  = "4096"
}

resource "tls_cert_request" "example" {
@@ -19,7 +19,7 @@ resource "kubernetes_certificate_signing_request" "example" {
  }
  spec {
    request = tls_cert_request.example.cert_request_pem
-    usages = ["client auth", "server auth"]
+    usages  = ["client auth", "server auth"]
  }
  auto_approve = true
}
@@ -41,12 +41,12 @@ resource "kubernetes_pod" "main" {
  }
  spec {
    container {
-      name = "default"
-      image = "alpine:latest"
+      name    = "default"
+      image   = "alpine:latest"
      command = ["cat", "/etc/test/tls.crt"]
      volume_mount {
        mount_path = "/etc/test"
-        name = "secretvol"
+        name       = "secretvol"
      }
    }
    volume {
8 changes: 4 additions & 4 deletions _examples/certificate-signing-request/variables.tf
@@ -1,7 +1,7 @@
-variable example_user {
-  default = "admin"
+variable "example_user" {
+  default = "admin"
}

-variable example_org {
-  default = "example cluster"
+variable "example_org" {
+  default = "example cluster"
}
68 changes: 68 additions & 0 deletions _examples/eks/README.md
@@ -0,0 +1,68 @@
# EKS (Amazon Elastic Kubernetes Service)

This example shows how to use the Terraform Kubernetes Provider and Terraform Helm Provider to configure an EKS cluster. The example config builds the EKS cluster and applies the Kubernetes configurations in a single operation.

You will need the following environment variables to be set:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`

See [AWS Provider docs](https://www.terraform.io/docs/providers/aws/index.html#configuration-reference) for more details about these variables and alternatives, like `AWS_PROFILE`.
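
For example, static credentials can be exported in the shell before running Terraform; the values below are placeholders, and a shared credentials file or `AWS_PROFILE` works just as well:

```
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
```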

Ensure that the `KUBE_CONFIG_FILE` and `KUBE_CONFIG_FILES` environment variables are NOT set, as they will interfere with the cluster build.

```
unset KUBE_CONFIG_FILE
unset KUBE_CONFIG_FILES
```

To install the EKS cluster using default values, run `terraform init` and `terraform apply` from the directory containing this README.

```
terraform init
terraform apply
```

## Kubeconfig for manual CLI access

This example generates a kubeconfig file in the current working directory. However, the token in this config expires in 15 minutes. The token can be refreshed by running `terraform apply` again. Export the `KUBECONFIG` environment variable to manually access the cluster:

```
terraform apply
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl get pods -n test
```

## Optional variables

The Kubernetes version can be specified at apply time:

```
terraform apply -var=kubernetes_version=1.18
```

See https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html for currently available versions.


### Worker node count and instance type

The number of worker nodes, and the instance type, can be specified at apply time:

```
terraform apply -var=workers_count=4 -var=workers_type=m4.xlarge
```

## Additional configuration of EKS

To view all available configuration options for the EKS module used in this example, see [terraform-aws-modules/eks docs](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest).

## Replacing the EKS cluster and re-creating the Kubernetes / Helm resources

When the cluster is initially created, the Kubernetes and Helm providers will not be initialized until authentication details are created for the cluster. However, for future operations that may involve replacing the underlying cluster (for example, changing the network where the EKS cluster resides), the EKS cluster will have to be targeted without the Kubernetes/Helm providers, as shown below. This is done by removing `module.kubernetes-config` from the Terraform state prior to replacing cluster credentials, to avoid passing outdated credentials into the providers.

This will create the new cluster and the Kubernetes resources in a single apply.

```
terraform state rm module.kubernetes-config
terraform apply
```