Every run of terraform apply reports pending changes, even when nothing in the configuration has changed. Here's a sample:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # local_file.kubeconfig must be replaced
-/+ resource "local_file" "kubeconfig" {
      ~ content = (sensitive) -> (sensitive) # forces replacement
      ~ id      = "a0d6ca2b35a8f4b9324e86a42c11a0bb1041227c" -> (known after apply)
        # (3 unchanged attributes hidden)
    }

  # module.rke_vsphere.rke_cluster.cluster will be updated in-place
  ~ resource "rke_cluster" "cluster" {
        id               = "7d30063d-14ae-4191-a445-23b6920f63ea"
      ~ kube_config_yaml = (sensitive value)
      ~ rke_cluster_yaml = (sensitive value)
      ~ rke_state        = (sensitive value)
        # (24 unchanged attributes hidden)

      ~ ingress {
          - http_port    = 80 -> null
          - https_port   = 443 -> null
          - network_mode = "hostPort" -> null
            # (5 unchanged attributes hidden)
        }

        # (6 unchanged blocks hidden)
    }

Plan: 1 to add, 1 to change, 1 to destroy.
Here's my cluster code:
terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "2.1.1"
    }
    template = {
      source  = "hashicorp/template"
      version = "2.2.0"
    }
    rke = {
      source  = "rancher/rke"
      version = "1.3.0"
    }
  }
}

...

# creates the nodes
module "nodes" {
  source         = "./nodes"
  vsphere_config = local.vsphere_config
  cluster_nodes  = local.cluster_config["cluster_nodes"]
}

resource "rke_cluster" "cluster" {
  cluster_name = local.cluster_config["cluster_name"]

  cloud_provider {
    name = "vsphere"
    vsphere_cloud_provider {
      global {
        insecure_flag = true
      }
      virtual_center {
        datacenters = local.vsphere_config["datacenter"]
        name        = local.vsphere_config["host"]
        user        = local.vsphere_config["username"]
        password    = local.vsphere_config["password"]
        port        = local.vsphere_config["port"]
      }
      workspace {
        datacenter        = local.vsphere_config["datacenter"]
        server            = local.vsphere_config["host"]
        default_datastore = local.vsphere_config["datastore"]
        folder            = "vm/${local.vsphere_config["folder"]}"
      }
    }
  }

  authentication {
    strategy = "x509"
  }

  authorization {
    mode = "rbac"
  }

  network {
    plugin = "flannel"
  }

  ingress {
    provider = "none"
  }

  kubernetes_version = local.cluster_config["kubernetes_version"]

  dynamic "nodes" {
    for_each = module.nodes.nodes_ips
    iterator = nodeip
    content {
      address           = nodeip.value["ip_address"]
      internal_address  = nodeip.value["ip_address"]
      hostname_override = nodeip.value["name"]
      user              = module.nodes.ssh_username
      role              = local.cluster_config["cluster_nodes"][nodeip.key] == "c" ? ["controlplane", "etcd"] : (local.cluster_config["cluster_nodes"][nodeip.key] == "w" ? ["worker"] : ["controlplane", "etcd", "worker"])
      ssh_key           = module.nodes.ssh_private_key
    }
  }

  addons = templatefile("${path.module}/files/storageclass.yaml", {})
}
Yep, we get the same problem, but with the EC2 flavor of the provider.
It seems to be caused by the ingress provider = "none" setting. The perpetual diff goes away when I explicitly set the values the provider keeps trying to remove:

ingress {
  provider     = "none"
  http_port    = 80
  https_port   = 443
  network_mode = "hostPort"
}

After that, no changes are reported.
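If pinning those values is undesirable, another option (untested here, just a sketch using Terraform's standard mechanism for provider-induced drift) might be to tell Terraform to ignore diffs on the ingress block entirely:

resource "rke_cluster" "cluster" {
  # ... rest of the configuration unchanged ...

  ingress {
    provider = "none"
  }

  # Workaround sketch: suppress the perpetual diff by ignoring all
  # changes to the ingress block. Note this also hides intentional
  # future edits to ingress until the lifecycle rule is removed.
  lifecycle {
    ignore_changes = [ingress]
  }
}

The trade-off is that real edits to the ingress block would be ignored too, so explicitly setting the ports as above is probably the cleaner fix.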