Add updated examples for GKE and EKS #1115
Conversation
Added docs in a separate PR #1121
These examples are amazing and are going to be super helpful for me! 😄
I just left a couple of things to fix:
- Better to use `>=` (or `~>`) in the version constraints so we don't have to update the versions in the examples too often, as sketched below
- There's a bunch of formatting quirks, I would just give everything a run through `terraform fmt`
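For the first point, that could look something like this (a sketch; the versions here are illustrative):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1" # 2.0.1 or any newer release
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0" # any 2.x release (>= 2.0, < 3.0)
    }
  }
}
```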
@@ -0,0 +1,56 @@
resource "kubernetes_namespace" "test" {
Need to run `terraform fmt` on this file I think.
_examples/aks/main.tf (Outdated)
}
helm = {
  source  = "hashicorp/helm"
  version = "2.0.1"
Should have a `=>` constraint here and in the one above, since this is an example.
Do you mean `version = ">= 2.0.1"`? I haven't used version constraints much. You can also click "insert a suggestion" to help me figure out exactly where to put this. Thanks!
Yes, sorry, that's what I mean. Just so it will pull down the latest version of the provider without us having to go back and update the doc examples all the time. This way we would only have to go back and change it when we have another major version bump.
The syntax is [here](https://www.terraform.io/docs/configuration/version-constraints.html).
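In short, per those docs, the common constraint forms are (a hypothetical snippet for illustration):

```hcl
version = "2.0.1"    # exact version only
version = ">= 2.0.1" # 2.0.1 or any newer release
version = "~> 2.0"   # any 2.x release (>= 2.0, < 3.0)
```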
For the cloud providers (like azurerm), I was going to leave the versions static so we don't have to update them until we want to. Potentially it would keep working for years. For Kubernetes and Helm though, I'm ok with these auto-updating, since we'll know about any breaking changes.
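That split might look something like this (a sketch; versions are purely illustrative):

```hcl
terraform {
  required_providers {
    # Cloud provider pinned exactly, so the example keeps working
    # until we deliberately choose to update it.
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.42.0"
    }
    # Kubernetes and Helm float forward; we track their breaking changes.
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0"
    }
  }
}
```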
Yep that makes sense!
@@ -0,0 +1,12 @@
variable "location" {
Also need `terraform fmt` on this file.
kubernetes/test-infra/eks/main.tf (Outdated)
@@ -2,7 +2,7 @@ terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
-     version = "1.13"
+     version = "9.9.9"
Is this version change on purpose or a left-over from testing?
It's a left-over. Thanks for catching this!
Oh oops, actually, this is in the test-infra dir, so it's on purpose. This PR should only modify `_examples`, which should not have 9.9.9 in them.
In order to prevent confusion around progressive apply, provide some working examples of creating Kubernetes resources in the same apply where the cluster is created.
🚀
🏆
@dak1n1 Sorry for commenting on a closed PR, but I'm a little confused; the examples and the Terraform documentation contradict each other a bit.
@kaloyanhs Hi, thanks for bringing this up! You're right that I should move the provider blocks into main.tf and remove them from the modules. That's what the docs you gave me describe.
I did run into that problem during testing, and couldn't use `depends_on` because of it. So that's a good reason for me to update the modules and move the provider blocks into the root module.

My main goal with these examples was to come up with a configuration that works reliably, since so many people wanted to run the Kubernetes provider in a single apply with their cluster build. What I came up with works in testing, but it could use some refinement.

My advice in general, regarding production configurations, is to always test in your specific environment. Especially when there are short-lived cloud credentials involved, you'll want to test that the configuration still works an hour later, after the short-lived token has expired. (They expire after about an hour in GKE, but each cloud provider is different; EKS tokens expire after 15 minutes, etc.)

In one of the 3 examples here, I did pass the providers into the modules as the documentation says to do, but the other two examples need updating to follow this pattern too. They still work, but it's not best practice.

Regarding the warning message in the docs, I'll try to explain. Previously, we were telling everyone to apply Kubernetes resources in a separate apply from the cluster build. That's what the docs used to say. But after seeing how often people wanted to use it in a single apply, I set out to find a way to accomplish this. I found that in many cases, you can use a single apply. But there are some cases in which a single apply will fail. Specifically, it will fail if the credentials being passed into the Kubernetes provider have expired, such as in the case of rotating out client certificates in the Kubernetes cluster. For this reason, I updated the warning message in the docs to say that you'll need to keep the Kubernetes resources in a separate module, so that you have the option of using a targeted apply when needed. This one I tested out by replacing the AKS client certificates, which is something people will have to do in production for long-lived clusters.

You'll also want the Kubernetes resources to be in a separate module if you ever plan to replace the Kubernetes cluster out from under existing Kubernetes resources. This will ensure the resources get re-created during the apply that creates the new Kubernetes cluster. I added a case for that at the end of each of the 3 READMEs for GKE, EKS, AKS.

We do have some upcoming docs improvements planned regarding all this authentication best practice. It's not very intuitive or even well documented at the moment, and oftentimes users end up with outdated credentials being passed into the provider. I think for the next iteration of these examples, I'll try and add an `exec` block.
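As a rough sketch of that pattern (the module names, paths, and outputs here are hypothetical, not the exact ones from these examples), the provider is configured in the root module and passed down explicitly:

```hcl
# Root module (main.tf). The kubernetes provider is configured here,
# built from the cluster module's outputs, and handed to the child module.
provider "kubernetes" {
  host                   = module.cluster.endpoint
  cluster_ca_certificate = base64decode(module.cluster.ca_certificate)
  token                  = module.cluster.token
}

module "cluster" {
  source = "./modules/cluster" # hypothetical path
}

module "kubernetes-config" {
  source = "./modules/kubernetes-config" # hypothetical path

  # Explicitly pass the configured provider into the module, rather
  # than declaring a provider block inside the module itself.
  providers = {
    kubernetes = kubernetes
  }
}
```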
Any option that is passed into a provider block has the potential to cause some issues in a single apply, if any of those options become outdated. In the example above, issues could arise if the value being passed into the provider becomes outdated. This is why the Learn guide was recently updated to recommend the exec block too.

TL;DR: two applies will always work. A single apply works in all cases except those that replace data being passed into the provider block (host, certs, token, etc). Many users want a single apply, and we'll continue working on ways to support these users, but it is a large and complex issue that is outside of the Kubernetes/Helm provider's control. So we don't currently have a fix to guarantee a single apply will work under all conditions.
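For reference, an exec-based provider configuration looks roughly like this (a sketch using the AWS CLI for EKS; the command, arguments, and cluster attributes are illustrative and vary by cloud):

```hcl
provider "kubernetes" {
  host                   = module.cluster.endpoint
  cluster_ca_certificate = base64decode(module.cluster.ca_certificate)

  # Fetch a fresh short-lived token on every Terraform run, instead of
  # passing in a token that may have expired between applies.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.cluster.name]
  }
}
```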
So in the above comment, I basically gave an explanation of the state of everything as I knew it... This led to some additional testing and improvement of the AKS and GKE examples, to add the suggested changes (thanks for that!). I'm seeing some promising results with using an `exec` block. I'm continuing to work on this in a draft PR here, since it's related to some other work I had in progress. I'll push out some updates to the examples and their READMEs to demonstrate the most reliable and straightforward way to configure this.
Description
These were tested using the current (master) version of the kubernetes provider, which will become version 2.0. (The `required_providers` `version =` attribute has been omitted until the new version is released.)
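That is, the examples declare something along these lines (a sketch, not the exact file contents):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # version constraint intentionally omitted until 2.0 is published
    }
  }
}
```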
Release Note
Release note for CHANGELOG:
Add some examples of using cloud providers with version 2 of the Kubernetes and Helm providers.
References
hashicorp/terraform-provider-helm#647