
NKE cluster with multiple node pools is always force replaced #647

Open
olivierboudet opened this issue Nov 6, 2023 · 2 comments
@olivierboudet

Nutanix Cluster Information

  • PC version pc.2022.6.0.2
  • PE 6.5.2 LTS
  • NKE 2.8.0 + ntnx-1.5 (k8s 1.25)
  • AOS 6.5.2

Terraform Version

Terraform v1.6.3
on linux_amd64
+ provider registry.terraform.io/nutanix/nutanix v1.9.1

Affected Resource(s)

  • nutanix_karbon_cluster

Terraform Configuration Files

resource "nutanix_karbon_cluster" "mycluster" {
  name       = "mycluster"
  version    = "1.25.6-0"
  storage_class_config {
    reclaim_policy = "Retain"
    volumes_config {
      file_system                = "ext4"
      flash_mode                 = true
      password                   = var.nutanix_password
      prism_element_cluster_uuid = "myuuid"
      storage_container          = "NutanixKubernetesEngine"
      username                   = var.nutanix_user
    }
  }
  cni_config {
    node_cidr_mask_size = 24
    pod_ipv4_cidr       = "172.20.0.0/16"
    service_ipv4_cidr   = "172.19.0.0/16"
  }
  worker_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 10
      memory_mib                 = 16384
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "myuuid"
    }
  }

  etcd_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 4
      memory_mib                 = 8192
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "myuuid"
    }
  }
  master_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 4
      memory_mib                 = 4096
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "myuuid"
    }
  }
  private_registry {
    registry_name = nutanix_karbon_private_registry.registry.name
  }
}

resource "nutanix_karbon_worker_nodepool" "mynodepool" {
  cluster_name = nutanix_karbon_cluster.mycluster.name
  name = "mynodepool"
  num_instances = 1
  node_os_version = "ntnx-1.5"
  
  ahv_config {
    cpu = 2
    memory_mib = 8192
    network_uuid               = nutanix_subnet.kubernetes.id
    prism_element_cluster_uuid = "myuuid"
  }

  labels={
    partner="mypartner"
  }

}

Debug Output

Panic Output

Expected Behavior

I expect that running terraform apply a second time should not detect any changes.

Actual Behavior

The first terraform apply creates the cluster and the node pool.
Running terraform apply a second time forces a replacement:


Terraform will perform the following actions:

  # nutanix_karbon_cluster.mycluster must be replaced
-/+ resource "nutanix_karbon_cluster" "mycluster" {
      ~ deployment_type             = "single-master" -> (known after apply)
      ~ id                          = "myuuid" -> (known after apply)
      ~ kubeapi_server_ipv4_address = "xxx.xxx.xx.x" -> (known after apply)
        name                        = "mycluster"
      ~ status                      = "kActive" -> (known after apply)
        # (2 unchanged attributes hidden)

      ~ etcd_node_pool {
            name            = "etcd-node-pool"
          ~ nodes           = [
              - {
                  - hostname     = "xxxxxxxx-etcd-0"
                  - ipv4_address = "xxx.xxx.xx.x"
                },
            ] -> (known after apply)
            # (2 unchanged attributes hidden)

            # (1 unchanged block hidden)
        }

      ~ master_node_pool {
            name            = "master-node-pool"
          ~ nodes           = [
              - {
                  - hostname     = "xxxxxx-master-0"
                  - ipv4_address = "xxx.xxx.xx.x"
                },
            ] -> (known after apply)
            # (2 unchanged attributes hidden)

            # (1 unchanged block hidden)
        }

      ~ worker_node_pool {
            name            = "worker-node-pool"
          ~ nodes           = [
              - {
                  - hostname     = "xxxxxxx-worker-0"
                  - ipv4_address = "xxx.xxx.xx.x"
                },
            ] -> (known after apply)
            # (2 unchanged attributes hidden)

            # (1 unchanged block hidden)
        }
      - worker_node_pool {
        }

        # (3 unchanged blocks hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Steps to Reproduce

  1. terraform apply
  2. terraform apply

Important Factors

References

@abhimutant
Collaborator

IIUC, nutanix_karbon_cluster creates one worker_node_pool and nutanix_karbon_worker_nodepool creates another. When we run the second terraform apply, Terraform sees a change in worker_node_pool: the config for the nutanix_karbon_cluster resource declares only one worker_node_pool, but a second one has been added to the existing infrastructure by the nutanix_karbon_worker_nodepool resource. The plan therefore detects that the cluster now has two worker node pools and forces a replacement.
You can use lifecycle to ignore changes to worker_node_pool in nutanix_karbon_cluster:
lifecycle { ignore_changes = [ worker_node_pool, ] }
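A minimal sketch of where that block would go, assuming the cluster resource from the configuration above:

resource "nutanix_karbon_cluster" "mycluster" {
  # ... existing cluster configuration ...

  # Ignore drift in worker_node_pool so pools managed by the separate
  # nutanix_karbon_worker_nodepool resource do not force a replacement.
  lifecycle {
    ignore_changes = [
      worker_node_pool,
    ]
  }
}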

@olivierboudet
Author

olivierboudet commented Nov 14, 2023

If I add lifecycle { ignore_changes = [ worker_node_pool, ] }, Terraform no longer detects changes each time I run apply.
But it no longer lets me add new node pools after the cluster has been created. If I try to add a new node pool, I get another error:


│ Error: unable to expand node pool during flattening: nodepool name must be passed
│
│   with nutanix_karbon_cluster.mycluster,
│   on nke.tf line 1, in resource "nutanix_karbon_cluster" "mycluster":
│    1: resource "nutanix_karbon_cluster" "mycluster" {
│

I am just adding a new node pool:

resource "nutanix_karbon_worker_nodepool" "test" {
  cluster_name = nutanix_karbon_cluster.mycluster.name
  name = "test"
  num_instances = 1
  node_os_version = "ntnx-1.5"

  ahv_config {
    cpu = 2
    memory_mib = 8192
    network_uuid               = nutanix_subnet.kubernetes.id
    prism_element_cluster_uuid = "0005f937-ba91-1d58-3f59-00620b61f748"
  }

  labels={
    partner="test"
  }
}

It looks like, after the cluster has been created, we cannot change its node pool configuration at all...
