diff --git a/README.md b/README.md
index 4288ace5b4..42fe366570 100644
--- a/README.md
+++ b/README.md
@@ -5,10 +5,12 @@
 Terraform logo

-- [Getting Started](https://learn.hashicorp.com/terraform?track=kubernetes#kubernetes)
-- Usage
-  - [Documentation](https://www.terraform.io/docs/providers/kubernetes/index.html)
+- [Getting Started](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/guides/getting-started)
+- [Interactive Tutorial](https://learn.hashicorp.com/collections/terraform/kubernetes)
+- Usage
+  - [Documentation](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs)
   - [Examples](https://github.com/hashicorp/terraform-provider-kubernetes/tree/master/_examples)
+  - [Kubernetes Provider 2.0 Upgrade Guide](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/guides/v2-upgrade-guide)
 - Mailing list: [Google Groups](http://groups.google.com/group/terraform-tool)
 - Chat: [#terraform-providers in Kubernetes](https://kubernetes.slack.com/messages/CJY6ATQH4) ([Sign up here](http://slack.k8s.io/))
 - Roadmap: [Q3 2020](_about/ROADMAP.md)
diff --git a/website/docs/guides/getting-started.html.markdown b/website/docs/guides/getting-started.html.markdown
index 36c5dad178..0e57f32ebd 100644
--- a/website/docs/guides/getting-started.html.markdown
+++ b/website/docs/guides/getting-started.html.markdown
@@ -2,9 +2,8 @@
 layout: "kubernetes"
 page_title: "Kubernetes: Getting Started with Kubernetes provider"
 description: |-
-  This guide focuses on scheduling Kubernetes resources like Pods,
-  Replication Controllers, Services etc. on top of a properly configured
-  and running Kubernetes cluster.
+  This guide focuses on configuring authentication to your existing Kubernetes
+  cluster so that resources can be managed using the Kubernetes provider for Terraform.
 ---

 # Getting Started with Kubernetes provider

@@ -14,20 +13,15 @@ description: |-

 -> Visit the [Manage Kubernetes Resources via Terraform](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/kubernetes&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) Learn tutorial for an interactive getting started experience.

-[Kubernetes](https://kubernetes.io/) (K8S) is an open-source workload scheduler
-with focus on containerized applications.
+[Kubernetes](https://kubernetes.io/) (K8S) is an open-source workload scheduler with a focus on containerized applications.

 There are at least 2 steps involved in scheduling your first container
 on a Kubernetes cluster. You need the Kubernetes cluster with all its components
-running _somewhere_ and then schedule the Kubernetes resources, like Pods,
-Replication Controllers, Services etc.
+running _somewhere_ and then define the Kubernetes resources, such as Deployments, Services, etc.

 This guide focuses mainly on the latter part and expects you to have
 a properly configured & running Kubernetes cluster.

-The guide also expects you to run the cluster on a cloud provider
-where Kubernetes can automatically provision a load balancer.
-
 ## Why Terraform

 While you could use `kubectl` or similar CLI-based tools mapped to API calls
@@ -54,179 +48,182 @@ orchestration with Terraform presents a few benefits.

 ## Provider Setup

-The easiest way to configure the provider is by creating/generating a config
-in a default location (`~/.kube/config`). That allows you
-to leave the provider block completely empty.
-
-```hcl
-provider "kubernetes" {}
-```
-
-If running in-cluster with an appropriate service account token available, you
-just need to disable config file loading:
+The provider needs to be configured with the proper credentials before it can be used. The simplest configuration is to specify the kubeconfig path:

```hcl
provider "kubernetes" {
-  load_config_file = "false"
+  config_path = "~/.kube/config"
}
```

-If you wish to configure the provider statically you can do so by providing TLS certificates:
+Another configuration option is to **statically** define TLS certificate credentials:

```hcl
provider "kubernetes" {
  host = "https://104.196.242.174"

-  client_certificate = file("~/.kube/client-cert.pem")
-  client_key = file("~/.kube/client-key.pem")
-  cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
-
-  load_config_file = false # when you wish not to load the local config file
+  client_certificate = "${file("~/.kube/client-cert.pem")}"
+  client_key = "${file("~/.kube/client-key.pem")}"
+  cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}
```

-or by providing username and password (HTTP Basic Authorization):
+Static TLS certificate credentials are present in Azure AKS clusters by default, and can be used with the [azurerm_kubernetes_cluster](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster) data source as shown below. This will automatically read the certificate information from the AKS cluster and pass it to the Kubernetes provider.

```hcl
-provider "kubernetes" {
-  host = "https://104.196.242.174"
-
-  username = "ClusterMaster"
-  password = "MindTheGap"
-
-  load_config_file = false # when you wish not to load the local config file
+data "azurerm_kubernetes_cluster" "example" {
+  name = "myakscluster"
+  resource_group_name = "my-example-resource-group"
}
-```
-
-After specifying the provider we may now run the following command
-to download the latest version of the Kubernetes provider.
+provider "kubernetes" {
+  host = "${data.azurerm_kubernetes_cluster.example.kube_config.0.host}"
+  client_certificate = "${base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)}"
+  client_key = "${base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.client_key)}"
+  cluster_ca_certificate = "${base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)}"
+}
```

-$ terraform init
-
-Initializing the backend...
-
-Initializing provider plugins...
-- Checking for available provider plugins...
-- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.10.1...
-The following providers do not have any version constraints in configuration,
-so the latest version was installed.
+Another option is to use an OAuth token, as in this example from a GKE cluster. The [google_client_config](https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/client_config) data source fetches a token from the Google Authorization server, which expires in 1 hour by default.

-To prevent automatic upgrades to new major versions that may contain breaking
-changes, it is recommended to add version = "..." constraints to the
-corresponding provider blocks in configuration, with the constraint strings
-suggested below.

-* provider.kubernetes: version = "~> 1.10.1"

+```hcl
+data "google_client_config" "default" {}
+data "google_container_cluster" "my_cluster" {
+  name = "my-cluster"
+  zone = "us-east1-a"
+}

-Terraform has been successfully initialized!
+provider "kubernetes" {
+  host = "https://${data.google_container_cluster.my_cluster.endpoint}"
+  token = data.google_client_config.default.access_token
+  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
+}
+```

-You may now begin working with Terraform. Try running "terraform plan" to see
-any changes that are required for your infrastructure. All Terraform commands
-should now work.
+For short-lived authentication tokens, like those found in EKS, which [expire in 15 minutes](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#controlling-access-to-eks-clusters), an exec-based credential plugin can be used to ensure the token is always up to date:

-If you ever set or change modules or backend configuration for Terraform,
-rerun this command to reinitialize your working directory. If you forget, other
-commands will detect it and remind you to do so if necessary.
+```hcl
+data "aws_eks_cluster" "example" {
+  name = "example"
+}
+provider "kubernetes" {
+  host = data.aws_eks_cluster.example.endpoint
+  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
+  exec {
+    api_version = "client.authentication.k8s.io/v1alpha1"
+    args = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.example.name]
+    command = "aws"
+  }
+}
```

-## Scheduling a Simple Application
-
-The main object in any Kubernetes application is [a Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod).
-Pod consists of one or more containers that are placed
-on cluster nodes based on CPU or memory availability.
+## Creating your first Kubernetes resources

-Here we create a pod with a single container running the nginx web server,
-exposing port 80 (HTTP) which can be then exposed
-through the load balancer to the real user.
+Once the provider is configured, you can apply the Kubernetes resources defined in your Terraform config file. The following is an example Terraform config file containing a few Kubernetes resources.

-Unlike in this simple example you'd commonly run more than
-a single instance of your application in production to reach
-high availability and adding labels will allow Kubernetes to find all
-pods (instances) for the purpose of forwarding the traffic
-to the exposed port.
+This configuration will create a scalable Nginx Deployment with 2 replicas. It will expose the Nginx frontend using a Service of type NodePort, which will make Nginx accessible via the public IP of the node running the containers. We'll use [minikube](https://minikube.sigs.k8s.io/docs/start/) for the Kubernetes cluster in this example, but any Kubernetes cluster can be used. Ensure that a Kubernetes cluster of some kind is running before applying the example config below.
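If you're following along with minikube, a local cluster can be started as follows (a minimal sketch, assuming minikube is already installed):

```
$ minikube start
```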
```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_namespace" "test" {
  metadata {
    name = "nginx"
  }
}

resource "kubernetes_deployment" "test" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.test.metadata.0.name
  }
  spec {
    replicas = 2
    selector {
      match_labels = {
        app = "MyTestApp"
      }
    }
    template {
      metadata {
        labels = {
          app = "MyTestApp"
        }
      }
      spec {
        container {
          image = "nginx"
          name  = "nginx-container"
          port {
            container_port = 80
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "test" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.test.metadata.0.name
  }
  spec {
    selector = {
      app = kubernetes_deployment.test.spec.0.template.0.metadata.0.labels.app
    }
    type = "NodePort"
    port {
      node_port   = 30201
      port        = 80
      target_port = 80
    }
  }
}
```

Use `terraform init` to download the specified version of the Kubernetes provider:

```
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching ">= 2.0.0"...
- Installing hashicorp/kubernetes v2.0.0...
- Installed hashicorp/kubernetes v2.0.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
+If you ever set or change modules or backend configuration for Terraform, +rerun this command to reinitialize your working directory. If you forget, other +commands will detect it and remind you to do so if necessary. +``` ------------------------------------------------------------------------- +Next, use `terraform plan` to display a list of resources to be created, and highlight any possible unknown attributes at apply time. For Deployments, all disk options are shown at plan time, but none will be created unless explicitly configured in the Deployment resource. + +``` +$ terraform plan An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: @@ -234,270 +231,411 @@ Resource actions are indicated with the following symbols: Terraform will perform the following actions: - # kubernetes_pod.nginx will be created - + resource "kubernetes_pod" "nginx" { - + id = (known after apply) + # kubernetes_deployment.test will be created + + resource "kubernetes_deployment" "test" { + + id = (known after apply) + + wait_for_rollout = true + metadata { + generation = (known after apply) - + labels = { - + "App" = "nginx" - } - + name = "nginx-example" - + namespace = "default" + + name = "nginx" + + namespace = "nginx" + resource_version = (known after apply) + self_link = (known after apply) + uid = (known after apply) } + spec { - + automount_service_account_token = true - + dns_policy = "ClusterFirst" - + enable_service_links = false - + host_ipc = false - + host_network = false - + host_pid = false - + hostname = (known after apply) - + node_name = (known after apply) - + restart_policy = "Always" - + service_account_name = (known after apply) - + share_process_namespace = false - + termination_grace_period_seconds = 30 - - + container { - + image = "nginx:1.7.8" - + image_pull_policy = (known after apply) - + name = "example" - + stdin = false - + stdin_once = false - + termination_message_path = "/dev/termination-log" - + tty = false - - + port { - + container_port = 80 - + protocol = "TCP" + + min_ready_seconds = 0 + + paused = false + + progress_deadline_seconds = 600 + + replicas = "2" + + revision_history_limit = 10 + + + selector { + + match_labels = { + + "app" = "MyTestApp" } + } - + resources { - + limits = (known after apply) - + requests = (known after apply) - } + + strategy { + + type = (known after apply) - + volume_mount { - + mount_path = (known after apply) - + name = (known after apply) - + read_only = (known after apply) - + sub_path = (known after apply) + + rolling_update { + + max_surge = (known after apply) + + max_unavailable = (known after apply) } } - + image_pull_secrets { - + name = (known after apply) - } - - + volume { - + name = (known after apply) - - + aws_elastic_block_store { - + fs_type = (known after apply) - + partition = (known after apply) - + read_only = (known after apply) - + volume_id = (known after apply) + + template { + + metadata { + + generation = (known after apply) + + labels = { + + "app" = "MyTestApp" + } + + name = (known after apply) + + resource_version = (known after apply) + + self_link = (known after apply) + + uid = (known after apply) } - + azure_disk { - + caching_mode = (known after apply) - + data_disk_uri = (known after apply) - + disk_name = (known after apply) - + fs_type = (known after apply) - + read_only = (known after apply) - } + + spec { + + automount_service_account_token = true + + dns_policy = "ClusterFirst" + + enable_service_links = true + + host_ipc = false + + 
host_network = false + + host_pid = false + + hostname = (known after apply) + + node_name = (known after apply) + + restart_policy = "Always" + + service_account_name = (known after apply) + + share_process_namespace = false + + termination_grace_period_seconds = 30 + + + container { + + image = "nginx" + + image_pull_policy = (known after apply) + + name = "nginx-container" + + stdin = false + + stdin_once = false + + termination_message_path = "/dev/termination-log" + + termination_message_policy = (known after apply) + + tty = false + + + port { + + container_port = 80 + + protocol = "TCP" + } - + azure_file { - + read_only = (known after apply) - + secret_name = (known after apply) - + share_name = (known after apply) - } + + resources { + + limits = (known after apply) + + requests = (known after apply) + } - + ceph_fs { - + monitors = (known after apply) - + path = (known after apply) - + read_only = (known after apply) - + secret_file = (known after apply) - + user = (known after apply) + + volume_mount { + + mount_path = (known after apply) + + mount_propagation = (known after apply) + + name = (known after apply) + + read_only = (known after apply) + + sub_path = (known after apply) + } + } - + secret_ref { + + image_pull_secrets { + name = (known after apply) } - } - + cinder { - + fs_type = (known after apply) - + read_only = (known after apply) - + volume_id = (known after apply) - } + + readiness_gate { + + condition_type = (known after apply) + } - + config_map { - + default_mode = (known after apply) - + name = (known after apply) + + volume { + + name = (known after apply) - + items { - + key = (known after apply) - + mode = (known after apply) - + path = (known after apply) - } - } + + aws_elastic_block_store { + + fs_type = (known after apply) + + partition = (known after apply) + + read_only = (known after apply) + + volume_id = (known after apply) + } - + downward_api { - + default_mode = (known after apply) + + azure_disk { + + caching_mode = (known after apply) + + data_disk_uri = (known after apply) + + disk_name = (known after apply) + + fs_type = (known after apply) + + kind = (known after apply) + + read_only = (known after apply) + } - + items { - + mode = (known after apply) - + path = (known after apply) + + azure_file { + + read_only = (known after apply) + + secret_name = (known after apply) + + share_name = (known after apply) + } - + field_ref { - + api_version = (known after apply) - + field_path = (known after apply) + + ceph_fs { + + monitors = (known after apply) + + path = (known after apply) + + read_only = (known after apply) + + secret_file = (known after apply) + + user = (known after apply) + + + secret_ref { + + name = (known after apply) + + namespace = (known after apply) + } } - + resource_field_ref { - + container_name = (known after apply) - + quantity = (known after apply) - + resource = (known after apply) + + cinder { + + fs_type = (known after apply) + + read_only = (known after apply) + + volume_id = (known after apply) } - } - } - + empty_dir { - + medium = (known after apply) - } + + config_map { + + default_mode = (known after apply) + + name = (known after apply) + + optional = (known after apply) - + fc { - + fs_type = (known after apply) - + lun = (known after apply) - + read_only = (known after apply) - + target_ww_ns = (known after apply) - } + + items { + + key = (known after apply) + + mode = (known after apply) + + path = (known after apply) + } + } - + flex_volume { - + driver = (known after apply) - + fs_type = (known 
after apply) - + options = (known after apply) - + read_only = (known after apply) + + csi { + + driver = (known after apply) + + fs_type = (known after apply) + + read_only = (known after apply) + + volume_attributes = (known after apply) + + volume_handle = (known after apply) + + + controller_expand_secret_ref { + + name = (known after apply) + + namespace = (known after apply) + } + + + controller_publish_secret_ref { + + name = (known after apply) + + namespace = (known after apply) + } + + + node_publish_secret_ref { + + name = (known after apply) + + namespace = (known after apply) + } + + + node_stage_secret_ref { + + name = (known after apply) + + namespace = (known after apply) + } + } - + secret_ref { - + name = (known after apply) - } - } + + downward_api { + + default_mode = (known after apply) - + flocker { - + dataset_name = (known after apply) - + dataset_uuid = (known after apply) - } + + items { + + mode = (known after apply) + + path = (known after apply) - + gce_persistent_disk { - + fs_type = (known after apply) - + partition = (known after apply) - + pd_name = (known after apply) - + read_only = (known after apply) - } + + field_ref { + + api_version = (known after apply) + + field_path = (known after apply) + } - + git_repo { - + directory = (known after apply) - + repository = (known after apply) - + revision = (known after apply) - } + + resource_field_ref { + + container_name = (known after apply) + + divisor = (known after apply) + + resource = (known after apply) + } + } + } - + glusterfs { - + endpoints_name = (known after apply) - + path = (known after apply) - + read_only = (known after apply) - } + + empty_dir { + + medium = (known after apply) + + size_limit = (known after apply) + } - + host_path { - + path = (known after apply) - } + + fc { + + fs_type = (known after apply) + + lun = (known after apply) + + read_only = (known after apply) + + target_ww_ns = (known after apply) + } - + iscsi { - + fs_type = (known after apply) - + iqn = (known after apply) - + iscsi_interface = (known after apply) - + lun = (known after apply) - + read_only = (known after apply) - + target_portal = (known after apply) - } + + flex_volume { + + driver = (known after apply) + + fs_type = (known after apply) + + options = (known after apply) + + read_only = (known after apply) - + local { - + path = (known after apply) - } + + secret_ref { + + name = (known after apply) + + namespace = (known after apply) + } + } - + nfs { - + path = (known after apply) - + read_only = (known after apply) - + server = (known after apply) - } + + flocker { + + dataset_name = (known after apply) + + dataset_uuid = (known after apply) + } - + persistent_volume_claim { - + claim_name = (known after apply) - + read_only = (known after apply) - } + + gce_persistent_disk { + + fs_type = (known after apply) + + partition = (known after apply) + + pd_name = (known after apply) + + read_only = (known after apply) + } - + photon_persistent_disk { - + fs_type = (known after apply) - + pd_id = (known after apply) - } + + git_repo { + + directory = (known after apply) + + repository = (known after apply) + + revision = (known after apply) + } - + quobyte { - + group = (known after apply) - + read_only = (known after apply) - + registry = (known after apply) - + user = (known after apply) - + volume = (known after apply) - } + + glusterfs { + + endpoints_name = (known after apply) + + path = (known after apply) + + read_only = (known after apply) + } - + rbd { - + ceph_monitors = (known after apply) - + 
fs_type = (known after apply) - + keyring = (known after apply) - + rados_user = (known after apply) - + rbd_image = (known after apply) - + rbd_pool = (known after apply) - + read_only = (known after apply) + + host_path { + + path = (known after apply) + + type = (known after apply) + } - + secret_ref { - + name = (known after apply) - } - } + + iscsi { + + fs_type = (known after apply) + + iqn = (known after apply) + + iscsi_interface = (known after apply) + + lun = (known after apply) + + read_only = (known after apply) + + target_portal = (known after apply) + } - + secret { - + default_mode = (known after apply) - + optional = (known after apply) - + secret_name = (known after apply) + + local { + + path = (known after apply) + } - + items { - + key = (known after apply) - + mode = (known after apply) - + path = (known after apply) - } - } + + nfs { + + path = (known after apply) + + read_only = (known after apply) + + server = (known after apply) + } + + + persistent_volume_claim { + + claim_name = (known after apply) + + read_only = (known after apply) + } + + + photon_persistent_disk { + + fs_type = (known after apply) + + pd_id = (known after apply) + } + + + projected { + + default_mode = (known after apply) + + + sources { + + config_map { + + name = (known after apply) + + optional = (known after apply) + + + items { + + key = (known after apply) + + mode = (known after apply) + + path = (known after apply) + } + } + + + downward_api { + + items { + + mode = (known after apply) + + path = (known after apply) + + + field_ref { + + api_version = (known after apply) + + field_path = (known after apply) + } + + + resource_field_ref { + + container_name = (known after apply) + + quantity = (known after apply) + + resource = (known after apply) + } + } + } + + + secret { + + name = (known after apply) + + optional = (known after apply) + + + items { + + key = (known after apply) + + mode = (known after apply) + + path = (known after apply) + } + } + + + service_account_token { + + audience = (known after apply) + + expiration_seconds = (known after apply) + + path = (known after apply) + } + } + } + + + quobyte { + + group = (known after apply) + + read_only = (known after apply) + + registry = (known after apply) + + user = (known after apply) + + volume = (known after apply) + } + + + rbd { + + ceph_monitors = (known after apply) + + fs_type = (known after apply) + + keyring = (known after apply) + + rados_user = (known after apply) + + rbd_image = (known after apply) + + rbd_pool = (known after apply) + + read_only = (known after apply) + + + secret_ref { + + name = (known after apply) + + namespace = (known after apply) + } + } + + + secret { + + default_mode = (known after apply) + + optional = (known after apply) + + secret_name = (known after apply) + + + items { + + key = (known after apply) + + mode = (known after apply) + + path = (known after apply) + } + } - + vsphere_volume { - + fs_type = (known after apply) - + volume_path = (known after apply) + + vsphere_volume { + + fs_type = (known after apply) + + volume_path = (known after apply) + } + } } } } } - # kubernetes_service.nginx will be created - + resource "kubernetes_service" "nginx" { - + id = (known after apply) - + load_balancer_ingress = (known after apply) + # kubernetes_namespace.test will be created + + resource "kubernetes_namespace" "test" { + + id = (known after apply) + + + metadata { + + generation = (known after apply) + + name = "nginx" + + resource_version = (known after apply) + + self_link = (known 
after apply) + + uid = (known after apply) + } + } + + # kubernetes_service.test will be created + + resource "kubernetes_service" "test" { + + id = (known after apply) + + status = (known after apply) + + wait_for_load_balancer = true + metadata { + generation = (known after apply) - + name = "nginx-example" - + namespace = "default" + + name = "nginx" + + namespace = "nginx" + resource_version = (known after apply) + self_link = (known after apply) + uid = (known after apply) @@ -506,15 +644,16 @@ Terraform will perform the following actions: + spec { + cluster_ip = (known after apply) + external_traffic_policy = (known after apply) + + health_check_node_port = (known after apply) + publish_not_ready_addresses = false + selector = { - + "App" = "nginx" + + "app" = "MyTestApp" } + session_affinity = "None" - + type = "LoadBalancer" + + type = "NodePort" + port { - + node_port = (known after apply) + + node_port = 30201 + port = 80 + protocol = "TCP" + target_port = "80" @@ -522,7 +661,7 @@ Terraform will perform the following actions: } } -Plan: 2 to add, 0 to change, 0 to destroy. +Plan: 3 to add, 0 to change, 0 to destroy. ------------------------------------------------------------------------ @@ -531,285 +670,81 @@ can't guarantee that exactly these actions will be performed if "terraform apply" is subsequently run. ``` -As we're happy with the plan output we may carry on applying -proposed changes. `terraform apply` will take of all the hard work -which includes creating resources via API in the right order, -supplying any defaults as necessary and waiting for -resources to finish provisioning to the point when it can either -present useful attributes or a failure (with reason) to the user. +Use `terraform apply` to create the resources shown above. ``` -$ terraform apply -auto-approve - -kubernetes_pod.nginx: Creating... -kubernetes_pod.nginx: Creation complete after 8s [id=default/nginx-example] -kubernetes_service.nginx: Creating... -kubernetes_service.nginx: Still creating... [10s elapsed] -kubernetes_service.nginx: Still creating... [20s elapsed] -kubernetes_service.nginx: Still creating... [30s elapsed] -kubernetes_service.nginx: Still creating... [40s elapsed] -kubernetes_service.nginx: Still creating... [50s elapsed] -kubernetes_service.nginx: Creation complete after 56s [id=default/nginx-example] - -Apply complete! Resources: 2 added, 0 changed, 0 destroyed. +$ terraform apply --auto-approve -Outputs: +kubernetes_namespace.test: Creating... +kubernetes_namespace.test: Creation complete after 0s [id=nginx] +kubernetes_deployment.test: Creating... +kubernetes_deployment.test: Creation complete after 7s [id=nginx/nginx] +kubernetes_service.test: Creating... +kubernetes_service.test: Creation complete after 0s [id=nginx/nginx] -lb_ip = 34.77.88.233 +Apply complete! Resources: 3 added, 0 changed, 0 destroyed. ``` -You may now enter that IP address to your favourite browser -and you should see the nginx welcome page. - -The [Kubernetes UI](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) -provides another way to check both the pod and the service there -once they're scheduled. - -## Reaching Scalability and Availability - -The Replication Controller allows you to replicate pods. This is useful -for maintaining overall availability and scalability of your application -exposed to the user. - -We can just replace our Pod with RC from the previous config -and keep the Service there. +The resources are now visible in the Kubernetes cluster. 
-```hcl -resource "kubernetes_deployment" "nginx" { - metadata { - name = "scalable-nginx-example" - labels = { - App = "ScalableNginxExample" - } - } - - spec { - replicas = 2 - selector { - match_labels = { - App = "ScalableNginxExample" - } - } - template { - metadata { - labels = { - App = "ScalableNginxExample" - } - } - spec { - container { - image = "nginx:1.7.8" - name = "example" - - port { - container_port = 80 - } - - resources { - limits = { - cpu = "0.5" - memory = "512Mi" - } - requests = { - cpu = "250m" - memory = "50Mi" - } - } - } - } - } - } -} - -resource "kubernetes_service" "nginx" { - metadata { - name = "nginx-example" - } - spec { - selector = { - App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App - } - port { - port = 80 - target_port = 80 - } - - type = "LoadBalancer" - } -} - -output "lb_ip" { - value = kubernetes_service.nginx.load_balancer_ingress[0].ip -} ``` +$ kubectl get all -n nginx -You may notice we also specified how much CPU and memory do we expect -single instance of that application to consume. This is incredibly -helpful for Kubernetes as it helps avoiding under-provisioning or over-provisioning -that would result in either unused resources (costing money) or lack -of resources (causing the app to crash or slow down). +NAME READY STATUS RESTARTS AGE +pod/nginx-86c669bff4-8g7g2 1/1 Running 0 38s +pod/nginx-86c669bff4-zgjkv 1/1 Running 0 38s -``` -$ terraform plan - -# ... +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/nginx NodePort 10.109.205.23 80:30201/TCP 30s -Plan: 2 to add, 0 to change, 0 to destroy. - ------------------------------------------------------------------------- -# ... +NAME READY UP-TO-DATE AVAILABLE AGE +deployment.apps/nginx 2/2 2 2 38s +NAME DESIRED CURRENT READY AGE +replicaset.apps/nginx-86c669bff4 2 2 2 38s ``` -``` -$ terraform apply -auto-approve -kubernetes_deployment.nginx: Creating... -kubernetes_deployment.nginx: Creation complete after 10s [id=default/scalable-nginx-example] -kubernetes_service.nginx: Creating... -kubernetes_service.nginx: Still creating... [10s elapsed] -kubernetes_service.nginx: Still creating... [20s elapsed] -kubernetes_service.nginx: Still creating... [30s elapsed] -kubernetes_service.nginx: Still creating... [40s elapsed] -kubernetes_service.nginx: Still creating... [50s elapsed] -kubernetes_service.nginx: Creation complete after 59s [id=default/nginx-example] - -Apply complete! Resources: 2 added, 0 changed, 0 destroyed. - -Outputs: - -lb_ip = 34.77.88.233 -``` - -Unlike in previous example, the IP address here will direct traffic -to one of the 2 pods scheduled in the cluster. - -### Updating Configuration - -As our application user-base grows we might need more instances to be scheduled. -The easiest way to achieve this is to increase `replicas` field in the config -accordingly. - -```hcl -resource "kubernetes_deployment" "example" { - metadata { - #... - } - spec { - replicas = 2 - } - template { - #... - } -} -``` - -You can verify before hitting the API that you're only changing what -you intended to change and that someone else didn't modify -the resource you created earlier. +The web server can be accessed using the public IP of the node running the Deployment. In this example, we're using minikube as the Kubernetes cluster, so the IP can be fetched using `minikube ip`. ``` -$ terraform plan - -Refreshing Terraform state in-memory prior to plan... 
-The refreshed state will be used to calculate this plan, but will not be
-persisted to local or remote state storage.
-
-kubernetes_deployment.nginx: Refreshing state... (ID: default/scalable-nginx-example)
-kubernetes_service.nginx: Refreshing state... (ID: default/nginx-example)
-
-The Terraform execution plan has been generated and is shown below.
-Resources are shown in alphabetical order for quick scanning. Green resources
-will be created (or destroyed and then created if an existing resource
-exists), yellow resources are being changed in-place, and red resources
-will be destroyed. Cyan entries are data sources to be read.
-
-Note: You didn't specify an "-out" parameter to save this plan, so when
-"apply" is called, Terraform can't guarantee this is what will execute.
-
- ~ kubernetes_deployment.nginx
-     spec.0.replicas: "2" => "5"
-
-
-Plan: 0 to add, 1 to change, 0 to destroy.
-```
-
-As we're happy with the proposed plan, we can just apply that change.
-
-```
-$ terraform apply
-```
-
-and 3 more replicas will be scheduled & attached to the load balancer.
-
-## Bonus: Managing Quotas and Limits
-
-As an operator managing cluster you're likely also responsible for
-using the cluster responsibly and fairly within teams.
-
-Resource Quotas and Limit Ranges both offer ways to put constraints
-in place around CPU, memory, disk space and other resources that
-will be consumed by cluster users.
-
-Resource Quota can constrain the whole namespace
-
-```hcl
-resource "kubernetes_resource_quota" "example" {
-  metadata {
-    name = "terraform-example"
-  }
-  spec {
-    hard = {
-      pods = 10
$ curl $(minikube ip):30201
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

Alternatively, look for the hostIP associated with a running Nginx pod and combine it with the NodePort to assemble the URL:

```
$ kubectl get pod nginx-86c669bff4-zgjkv -n nginx -o json | jq .status.hostIP
"192.168.39.189"

$ kubectl get services -n nginx
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.109.205.23   <none>        80:30201/TCP   19m

$ curl 192.168.39.189:30201
```
-$ terraform apply
-```
-
-## Conclusion
-
-Terraform offers you an effective way to manage both compute for
-your Kubernetes cluster and Kubernetes resources. Check out
-the extensive documentation of the Kubernetes provider linked
-from the menu.
diff --git a/website/docs/guides/v2-upgrade-guide.markdown b/website/docs/guides/v2-upgrade-guide.markdown
index 33bd9c5d17..c4ba05f84a 100644
--- a/website/docs/guides/v2-upgrade-guide.markdown
+++ b/website/docs/guides/v2-upgrade-guide.markdown
@@ -9,12 +9,156 @@ description: |-

 This guide covers the changes introduced in v2.0.0 of the Kubernetes provider and what you may need to do to upgrade your configuration.

-## Installing and testing this update
-
 Use `terraform init` to install version 2 of the provider. Then run `terraform plan` to determine if the upgrade will affect any existing resources. Some resources will have updated defaults and may be modified as a result. To opt out of this change, see the guide below and update your Terraform config file to match the existing resource settings (for example, set `automount_service_account_token=false`). Then run `terraform plan` again to ensure no resource updates will be applied. NOTE: Even if there are no resource updates to apply, you may need to run `terraform refresh` to update your state to the newest version. Otherwise, some commands might fail with `Error: missing expected {`.

+## Installing and testing this update
+
+The `required_providers` block can be used to move between version 1.x and version 2.x of the Kubernetes provider for testing purposes. Please note that this is only safe while you remain at the `terraform plan` stage. Once you run `terraform apply` or `terraform refresh`, the changes to Terraform state become permanent, and rolling back is no longer an option. It may be possible to roll back the state by making a copy of `terraform.tfstate` before running `apply` or `refresh`, but this configuration is unsupported.
+
+### Using required_providers to test the update
+
+The version of the Kubernetes provider can be controlled using the `required_providers` block:

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = ">= 2.0"
    }
  }
}
```

+When the above code is in place, run `terraform init -upgrade` to upgrade the provider version.

```
$ terraform init -upgrade
```

+Ensure you have a valid provider block for 2.0 before proceeding with the `terraform plan` below. In version 2.0 of the provider, [provider configuration is now required](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs).
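+For example, a minimal sketch of a valid 2.0 provider block (the kubeconfig path here is an assumption) looks like this:

```hcl
provider "kubernetes" {
  # Point this at the kubeconfig for the cluster being upgraded.
  config_path = "~/.kube/config"
}
```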
Alternatively, a quick way to get up and running with the new provider configuration is to set `KUBE_CONFIG_PATH` to point to your existing kubeconfig.

```
export KUBE_CONFIG_PATH=$KUBECONFIG
```

Then run `terraform plan` to see what changes will be applied. This example shows the specific fields that would be modified, and their effect on the resources, such as replacement or an in-place update. Some output is omitted for clarity.

```
$ export KUBE_CONFIG_PATH=$KUBECONFIG
$ terraform plan

kubernetes_pod.test: Refreshing state... [id=default/test]
kubernetes_job.test: Refreshing state... [id=default/test]
kubernetes_stateful_set.test: Refreshing state... [id=default/test]
kubernetes_deployment.test: Refreshing state... [id=default/test]
kubernetes_daemonset.test: Refreshing state... [id=default/test]
kubernetes_cron_job.test: Refreshing state... [id=default/test]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # kubernetes_cron_job.test must be replaced
-/+ resource "kubernetes_cron_job" "test" {
      ~ enable_service_links = false -> true # forces replacement

  # kubernetes_daemonset.test will be updated in-place
  ~ resource "kubernetes_daemonset" "test" {
      + wait_for_rollout = true
      ~ template {
          ~ spec {
              ~ enable_service_links = false -> true

  # kubernetes_deployment.test will be updated in-place
  ~ resource "kubernetes_deployment" "test" {
      ~ spec {
          ~ enable_service_links = false -> true

  # kubernetes_job.test must be replaced
-/+ resource "kubernetes_job" "test" {
      ~ enable_service_links = false -> true # forces replacement

  # kubernetes_stateful_set.test will be updated in-place
  ~ resource "kubernetes_stateful_set" "test" {
      ~ spec {
          ~ enable_service_links = false -> true

Plan: 2 to add, 3 to change, 2 to destroy.
```

Using the output from `terraform plan`, you can make modifications to your existing Terraform config to avoid any unwanted resource changes. For example, in the above config, adding `enable_service_links = false` to the resources would prevent any changes from occurring to the existing resources.

#### Known limitation: Pod data sources need manual upgrade

During `terraform plan`, you might encounter the error below:

```
Error: .spec[0].container[0].resources[0].limits: missing expected {
```

This occurs when a Pod data source is present during upgrade. To work around this error, remove the data source from state and try the plan again.

```
$ terraform state rm data.kubernetes_pod.test
Removed data.kubernetes_pod.test
Successfully removed 1 resource instance(s).

$ terraform plan
```

The data source will automatically be added back to state with data from the upgraded schema.

### Rolling back to version 1.x

If you've run the above upgrade and plan, but you don't want to proceed with the 2.0 upgrade, you can roll back using the following steps. NOTE: this will only work if you haven't run `terraform apply` or `terraform refresh` while testing version 2 of the provider.

```
$ terraform version
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/kubernetes v2.0.0
```

Set the provider version back to 1.x.
```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "1.13"
    }
  }
}
```

Then run `terraform init -upgrade` to install the old provider version.

```
$ terraform init -upgrade

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching "1.13"...
- Installing hashicorp/kubernetes v1.13.0...
- Installed hashicorp/kubernetes v1.13.0 (signed by HashiCorp)
```

The provider is now downgraded.

```
$ terraform version
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.0
```

 ## Changes in v2.0.0

 ### Changes to Kubernetes credentials supplied in the provider block
diff --git a/website/docs/index.html.markdown b/website/docs/index.html.markdown
index 84807259d1..dfee5b117a 100644
--- a/website/docs/index.html.markdown
+++ b/website/docs/index.html.markdown
@@ -43,9 +43,12 @@ Terraform providers for various cloud providers feature resources to spin up man

 To use these credentials with the Kubernetes provider, they can be interpolated into the respective attributes of the Kubernetes provider configuration block.

-~> **WARNING** When using interpolation to pass credentials to the Kubernetes provider from other resources, these resources SHOULD NOT be created in the same `apply` operation where Kubernetes provider resources are also used. This will lead to intermittent and unpredictable errors which are hard to debug and diagnose. The root issue lies with the order in which Terraform itself evaluates the provider blocks vs. actual resources. Please refer to [this section of Terraform docs](https://www.terraform.io/docs/configuration/providers.html#provider-configuration) for further explanation.
+~> **WARNING** When using interpolation to pass credentials to the Kubernetes provider from other resources, these resources SHOULD NOT be created in the same Terraform module where Kubernetes provider resources are also used. This will lead to intermittent and unpredictable errors which are hard to debug and diagnose. The root issue lies with the order in which Terraform itself evaluates the provider blocks vs. actual resources. Please refer to [this section of Terraform docs](https://www.terraform.io/docs/configuration/providers.html#provider-configuration) for further explanation.
+
+The most reliable way to configure the Kubernetes provider is to ensure that the cluster itself and the Kubernetes provider resources can be managed with separate `apply` operations. Data sources can be used to convey values between the two stages as needed.
+
+For specific usage examples, see the guides for [AKS](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/aks/README.md), [EKS](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/eks/README.md), and [GKE](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/gke/README.md).

-The best-practice in this case is to ensure that the cluster itself and the Kubernetes provider resources are managed with separate `apply` operations. Data-sources can be used to convey values between the two stages as needed.

 ## Authentication
@@ -111,6 +114,25 @@
 provider "kubernetes" {

~> If you have **both** valid configurations in a config file and static configuration, the static one is used as an override. i.e. any static field will override its counterpart loaded from the config.
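+As a sketch of this override behavior (the endpoint URL below is purely illustrative), a static `host` set alongside `config_path` takes precedence over the host loaded from the kubeconfig:
+
+```hcl
+provider "kubernetes" {
+  config_path = "~/.kube/config"
+
+  # This static value overrides the host found in the kubeconfig file.
+  host = "https://cluster-api.example.com"
+}
+```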
+## Exec-based credential plugins
+
+Some cloud providers have short-lived authentication tokens that can expire relatively quickly. To ensure the Kubernetes provider is receiving valid credentials, an exec-based plugin can be used to fetch a new token before initializing the provider. For example, on EKS, the `aws eks get-token` command can be used:
+
+```hcl
+provider "kubernetes" {
+  host                   = var.cluster_endpoint
+  cluster_ca_certificate = base64decode(var.cluster_ca_cert)
+  exec {
+    api_version = "client.authentication.k8s.io/v1alpha1"
+    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
+    command     = "aws"
+  }
+}
+```
+
+For further reading, see these examples which demonstrate different approaches to keeping the cluster credentials up to date: [AKS](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/aks/README.md), [EKS](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/eks/README.md), and [GKE](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/gke/README.md).
+
 ## Argument Reference

 The following arguments are supported: