80:30201/TCP 30s
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-------------------------------------------------------------------------
-# ...
+NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
+deployment.apps/nginx   2/2     2            2           38s
+
+NAME                               DESIRED   CURRENT   READY   AGE
+replicaset.apps/nginx-86c669bff4   2         2         2       38s
```
-```
-$ terraform apply -auto-approve
-kubernetes_deployment.nginx: Creating...
-kubernetes_deployment.nginx: Creation complete after 10s [id=default/scalable-nginx-example]
-kubernetes_service.nginx: Creating...
-kubernetes_service.nginx: Still creating... [10s elapsed]
-kubernetes_service.nginx: Still creating... [20s elapsed]
-kubernetes_service.nginx: Still creating... [30s elapsed]
-kubernetes_service.nginx: Still creating... [40s elapsed]
-kubernetes_service.nginx: Still creating... [50s elapsed]
-kubernetes_service.nginx: Creation complete after 59s [id=default/nginx-example]
-
-Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-lb_ip = 34.77.88.233
-```
-
-Unlike in previous example, the IP address here will direct traffic
-to one of the 2 pods scheduled in the cluster.
-
-### Updating Configuration
-
-As our application user-base grows we might need more instances to be scheduled.
-The easiest way to achieve this is to increase `replicas` field in the config
-accordingly.
-
-```hcl
-resource "kubernetes_deployment" "example" {
- metadata {
- #...
- }
- spec {
- replicas = 2
- }
- template {
- #...
- }
-}
-```
-
-You can verify before hitting the API that you're only changing what
-you intended to change and that someone else didn't modify
-the resource you created earlier.
+The web server can be accessed using the public IP of the node running the Deployment. In this example, we're using minikube as the Kubernetes cluster, so the IP can be fetched using `minikube ip`.
```
-$ terraform plan
-
-Refreshing Terraform state in-memory prior to plan...
-The refreshed state will be used to calculate this plan, but will not be
-persisted to local or remote state storage.
-
-kubernetes_deployment.nginx: Refreshing state... (ID: default/scalable-nginx-example)
-kubernetes_service.nginx: Refreshing state... (ID: default/nginx-example)
-
-The Terraform execution plan has been generated and is shown below.
-Resources are shown in alphabetical order for quick scanning. Green resources
-will be created (or destroyed and then created if an existing resource
-exists), yellow resources are being changed in-place, and red resources
-will be destroyed. Cyan entries are data sources to be read.
-
-Note: You didn't specify an "-out" parameter to save this plan, so when
-"apply" is called, Terraform can't guarantee this is what will execute.
-
- ~ kubernetes_deployment.nginx
- spec.0.replicas: "2" => "5"
-
-
-Plan: 0 to add, 1 to change, 0 to destroy.
-```
-
-As we're happy with the proposed plan, we can just apply that change.
-
-```
-$ terraform apply
-```
-
-and 3 more replicas will be scheduled & attached to the load balancer.
-
-## Bonus: Managing Quotas and Limits
-
-As an operator managing cluster you're likely also responsible for
-using the cluster responsibly and fairly within teams.
-
-Resource Quotas and Limit Ranges both offer ways to put constraints
-in place around CPU, memory, disk space and other resources that
-will be consumed by cluster users.
-
-Resource Quota can constrain the whole namespace
-
-```hcl
-resource "kubernetes_resource_quota" "example" {
- metadata {
- name = "terraform-example"
- }
- spec {
- hard = {
- pods = 10
+$ curl $(minikube ip):30201
+<!DOCTYPE html>
+<html>
+<head>
+<title>Welcome to nginx!</title>
+<style>
+    body {
+        width: 35em;
+        margin: 0 auto;
+        font-family: Tahoma, Verdana, Arial, sans-serif;
+    }
+</style>
+</head>
+<body>
+<h1>Welcome to nginx!</h1>
+<p>If you see this page, the nginx web server is successfully installed and
+working. Further configuration is required.</p>
+
+<p>For online documentation and support please refer to
+<a href="http://nginx.org/">nginx.org</a>.<br/>
+Commercial support is available at
+<a href="http://nginx.com/">nginx.com</a>.</p>
+
+<p><em>Thank you for using nginx.</em></p>
+</body>
+</html>
```
-whereas Limit Range can impose limits on a specific resource
-type (e.g. Pod or Persistent Volume Claim).
+Alternatively, look up the `hostIP` associated with a running nginx pod and combine it with the NodePort to assemble the URL:
-```hcl
-resource "kubernetes_limit_range" "example" {
- metadata {
- name = "terraform-example"
- }
- spec {
- limit {
- type = "Pod"
- max = {
- cpu = "200m"
- memory = "1024M"
- }
- }
- limit {
- type = "PersistentVolumeClaim"
- min = {
- storage = "24M"
- }
- }
- limit {
- type = "Container"
- default = {
- cpu = "50m"
- memory = "24M"
- }
- }
- }
-}
```
+$ kubectl get pod nginx-86c669bff4-zgjkv -n nginx -o json | jq .status.hostIP
+"192.168.39.189"
-```
-$ terraform plan
-```
+$ kubectl get services -n nginx
+NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
+nginx   NodePort   10.109.205.23   <none>        80:30201/TCP   19m
+$ curl 192.168.39.189:30201
```
-$ terraform apply
-```
-
-## Conclusion
-
-Terraform offers you an effective way to manage both compute for
-your Kubernetes cluster and Kubernetes resources. Check out
-the extensive documentation of the Kubernetes provider linked
-from the menu.
diff --git a/website/docs/guides/v2-upgrade-guide.markdown b/website/docs/guides/v2-upgrade-guide.markdown
index 33bd9c5d17..c4ba05f84a 100644
--- a/website/docs/guides/v2-upgrade-guide.markdown
+++ b/website/docs/guides/v2-upgrade-guide.markdown
@@ -9,12 +9,156 @@ description: |-
This guide covers the changes introduced in v2.0.0 of the Kubernetes provider and what you may need to do to upgrade your configuration.
-## Installing and testing this update
-
Use `terraform init` to install version 2 of the provider. Then run `terraform plan` to determine if the upgrade will affect any existing resources. Some resources will have updated defaults and may be modified as a result. To opt out of this change, see the guide below and update your Terraform config file to match the existing resource settings (for example, set `automount_service_account_token=false`). Then run `terraform plan` again to ensure no resource updates will be applied.
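+
+As a sketch of that opt-out (the resource shown here is illustrative, not taken from your config):
+
+```hcl
+resource "kubernetes_service_account" "example" {
+  metadata {
+    name = "terraform-example"
+  }
+
+  # Pin the pre-2.0 default explicitly so the upgrade plan shows no change.
+  automount_service_account_token = false
+}
+```
+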
NOTE: Even if there are no resource updates to apply, you may need to run `terraform refresh` to update your state to the newest version. Otherwise, some commands might fail with `Error: missing expected {`.
+## Installing and testing this update
+
+The `required_providers` block can be used to move between version 1.x and version 2.x of the Kubernetes provider for testing purposes. Note that this is only safe as long as you stick to `terraform plan`: once you run `terraform apply` or `terraform refresh`, the changes to Terraform State become permanent, and rolling back is no longer an option. It may be possible to roll back the State by making a copy of `terraform.tfstate` before running `apply` or `refresh`, but this workflow is unsupported.
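+
+For example, a minimal precaution before testing (assuming a local state backend; the backup filename is illustrative):
+
+```
+$ cp terraform.tfstate terraform.tfstate.backup
+```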
+
+### Using required_providers to test the update
+
+The version of the Kubernetes provider can be controlled using the `required_providers` block:
+
+```hcl
+terraform {
+  required_providers {
+    kubernetes = {
+      source  = "hashicorp/kubernetes"
+      version = ">= 2.0"
+    }
+  }
+}
+```
+
+When the above code is in place, run `terraform init -upgrade` to upgrade the provider version.
+
+```
+$ terraform init -upgrade
+```
+
+Ensure you have a valid provider block for 2.0 before proceeding with the `terraform plan` below. In version 2.0 of the provider, [provider configuration is now required](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs). A quick way to get up and running with the new provider configuration is to set `KUBE_CONFIG_PATH` to point to your existing kubeconfig.
+
+```
+export KUBE_CONFIG_PATH=$KUBECONFIG
+```
+
+Then run `terraform plan` to see what changes will be applied. The example below shows the specific fields that would be modified and the effect on each resource, such as a replacement or an in-place update. Some output is omitted for clarity.
+
+```
+$ export KUBE_CONFIG_PATH=$KUBECONFIG
+$ terraform plan
+
+kubernetes_pod.test: Refreshing state... [id=default/test]
+kubernetes_job.test: Refreshing state... [id=default/test]
+kubernetes_stateful_set.test: Refreshing state... [id=default/test]
+kubernetes_deployment.test: Refreshing state... [id=default/test]
+kubernetes_daemonset.test: Refreshing state... [id=default/test]
+kubernetes_cron_job.test: Refreshing state... [id=default/test]
+
+An execution plan has been generated and is shown below.
+Resource actions are indicated with the following symbols:
+  ~ update in-place
+-/+ destroy and then create replacement
+
+Terraform will perform the following actions:
+
+  # kubernetes_cron_job.test must be replaced
+-/+ resource "kubernetes_cron_job" "test" {
+      ~ enable_service_links = false -> true # forces replacement
+
+  # kubernetes_daemonset.test will be updated in-place
+  ~ resource "kubernetes_daemonset" "test" {
+      + wait_for_rollout = true
+      ~ template {
+          ~ spec {
+              ~ enable_service_links = false -> true
+
+  # kubernetes_deployment.test will be updated in-place
+  ~ resource "kubernetes_deployment" "test" {
+      ~ spec {
+          ~ enable_service_links = false -> true
+
+  # kubernetes_job.test must be replaced
+-/+ resource "kubernetes_job" "test" {
+      ~ enable_service_links = false -> true # forces replacement
+
+  # kubernetes_stateful_set.test will be updated in-place
+  ~ resource "kubernetes_stateful_set" "test" {
+      ~ spec {
+          ~ enable_service_links = false -> true
+
+Plan: 2 to add, 3 to change, 2 to destroy.
+```
+
+Using the output from `terraform plan`, you can modify your existing Terraform config to avoid any unwanted resource changes. For example, in the plan above, adding `enable_service_links = false` to the affected resources would prevent any changes from occurring to the existing resources, as sketched below.
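+
+A minimal sketch of that opt-out for the Deployment (the resource name and remaining fields here are illustrative; `enable_service_links` sits in the pod spec):
+
+```hcl
+resource "kubernetes_deployment" "test" {
+  metadata {
+    name = "test"
+  }
+
+  spec {
+    selector {
+      match_labels = {
+        app = "test"
+      }
+    }
+
+    template {
+      metadata {
+        labels = {
+          app = "test"
+        }
+      }
+
+      spec {
+        # Match the pre-2.0 default explicitly so the upgrade plan shows no change.
+        enable_service_links = false
+
+        container {
+          name  = "nginx"
+          image = "nginx:1.19"
+        }
+      }
+    }
+  }
+}
+```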
+
+#### Known limitation: Pod data sources need manual upgrade
+
+During `terraform plan`, you might encounter the error below:
+
+```
+Error: .spec[0].container[0].resources[0].limits: missing expected {
+```
+
+This occurs when a Pod data source is present during the upgrade. To work around this error, remove the data source from state and run the plan again.
+
+```
+$ terraform state rm data.kubernetes_pod.test
+Removed data.kubernetes_pod.test
+Successfully removed 1 resource instance(s).
+
+$ terraform plan
+```
+
+The data source will automatically be added back to state with data from the upgraded schema.
+
+### Rolling back to version 1.x
+
+If you've run the above upgrade and plan, but you don't want to proceed with the 2.0 upgrade, you can roll back using the following steps. NOTE: this will only work if you haven't run `terraform apply` or `terraform refresh` while testing version 2 of the provider.
+
+```
+$ terraform version
+Terraform v0.14.4
++ provider registry.terraform.io/hashicorp/kubernetes v2.0
+```
+
+Set the provider version back to 1.x.
+
+```hcl
+terraform {
+  required_providers {
+    kubernetes = {
+      source  = "hashicorp/kubernetes"
+      version = "1.13.0"
+    }
+  }
+}
+```
+
+Then run `terraform init -upgrade` to install the old provider version.
+
+```
+$ terraform init -upgrade
+
+Initializing the backend...
+
+Initializing provider plugins...
+- Finding hashicorp/kubernetes versions matching "1.13.0"...
+- Installing hashicorp/kubernetes v1.13.0...
+- Installed hashicorp/kubernetes v1.13.0 (signed by HashiCorp)
+```
+
+The provider is now downgraded.
+
+```
+$ terraform version
+Terraform v0.14.4
++ provider registry.terraform.io/hashicorp/kubernetes v1.13.0
+```
+
## Changes in v2.0.0
### Changes to Kubernetes credentials supplied in the provider block
diff --git a/website/docs/index.html.markdown b/website/docs/index.html.markdown
index 84807259d1..dfee5b117a 100644
--- a/website/docs/index.html.markdown
+++ b/website/docs/index.html.markdown
@@ -43,9 +43,12 @@ Terraform providers for various cloud providers feature resources to spin up man
To use these credentials with the Kubernetes provider, they can be interpolated into the respective attributes of the Kubernetes provider configuration block.
-~> **WARNING** When using interpolation to pass credentials to the Kubernetes provider from other resources, these resources SHOULD NOT be created in the same `apply` operation where Kubernetes provider resources are also used. This will lead to intermittent and unpredictable errors which are hard to debug and diagnose. The root issue lies with the order in which Terraform itself evaluates the provider blocks vs. actual resources. Please refer to [this section of Terraform docs](https://www.terraform.io/docs/configuration/providers.html#provider-configuration) for further explanation.
+~> **WARNING** When using interpolation to pass credentials to the Kubernetes provider from other resources, these resources SHOULD NOT be created in the same Terraform module where Kubernetes provider resources are also used. This will lead to intermittent and unpredictable errors which are hard to debug and diagnose. The root issue lies with the order in which Terraform itself evaluates the provider blocks vs. actual resources. Please refer to [this section of Terraform docs](https://www.terraform.io/docs/configuration/providers.html#provider-configuration) for further explanation.
+
+The most reliable way to configure the Kubernetes provider is to ensure that the cluster itself and the Kubernetes provider resources are managed in separate `apply` operations. Data sources can be used to convey values between the two stages as needed; see the sketch below.
+
+For specific usage examples, see the guides for [AKS](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/aks/README.md), [EKS](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/eks/README.md), and [GKE](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/gke/README.md).
-The best-practice in this case is to ensure that the cluster itself and the Kubernetes provider resources are managed with separate `apply` operations. Data-sources can be used to convey values between the two stages as needed.
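+
+As an illustrative sketch of the two-stage pattern on EKS (the data sources are from the AWS provider; the names and cluster here are assumptions, not part of this guide):
+
+```hcl
+# Stage two: read connection details for a cluster created in a separate apply.
+data "aws_eks_cluster" "example" {
+  name = "example-cluster"
+}
+
+data "aws_eks_cluster_auth" "example" {
+  name = "example-cluster"
+}
+
+provider "kubernetes" {
+  host                   = data.aws_eks_cluster.example.endpoint
+  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
+  token                  = data.aws_eks_cluster_auth.example.token
+}
+```
+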
## Authentication
@@ -111,6 +114,25 @@ provider "kubernetes" {
~> If you have **both** valid configurations in a config file and static configuration, the static one is used as an override.
i.e. any static field will override its counterpart loaded from the config.
+## Exec-based credential plugins
+
+Some cloud providers have short-lived authentication tokens that can expire relatively quickly. To ensure the Kubernetes provider is receiving valid credentials, an exec-based plugin can be used to fetch a new token before initializing the provider. For example, on EKS, the command `aws eks get-token` can be used:
+
+```hcl
+provider "kubernetes" {
+  host                   = var.cluster_endpoint
+  cluster_ca_certificate = base64decode(var.cluster_ca_cert)
+
+  exec {
+    api_version = "client.authentication.k8s.io/v1alpha1"
+    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
+    command     = "aws"
+  }
+}
+```
+
+For further reading, see these examples which demonstrate different approaches to keeping the cluster credentials up to date: [AKS](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/aks/README.md), [EKS](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/eks/README.md), and [GKE](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/_examples/gke/README.md).
+
## Argument Reference
The following arguments are supported: