
EC stack update fails if there are configuration changes that are not in Terraform state #773

Open
mmfernandespmg opened this issue Jan 18, 2024 · 3 comments
Labels
bug Something isn't working

Comments

mmfernandespmg commented Jan 18, 2024

Readiness Checklist

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I am reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Terraform Apply performs the EC stack update regardless of changes made to the stack configuration outside of Terraform.

Current Behavior

When the Elastic Stack associated with the ec_deployment resource has a pending update and we run Terraform Apply, the provider intermittently fails to apply it. This happens when the instance_configuration_id attribute has changed on EC (due to autoscaling) and the provider tries to change its value in the same request as the EC stack version update. The request fails because EC does not allow cluster configuration changes to be made at the same time as a stack version upgrade.
The error thrown is:

╷
│ Error: failed updating deployment
│ 
│   with module.elasticcloud.ec_deployment.elasticcloud,
│   on modules/elasticcloud/main.tf line 110, in resource "ec_deployment" "elasticcloud":
│  110: resource "ec_deployment" "elasticcloud" ***
│ 
│ api error: 1 error occurred:
│ 	* clusters.topology_and_version_change.prohibited: You must perform a
│ version upgrade separately from changes to the cluster topology (memory,
│ number of zones, dedicated master nodes, etc). The following topology
│ changes have been detected: instance_configuration_id changed in
│ `gcp.es.datafrozen.n2.68x10x90`, instance_configuration_id changed in
│ `gcp.es.ml.n2.68x32x45` (resources.elasticsearch[0])
│ 
│ 
╵
Error: Process completed with exit code 1.

Terraform definition

resource "ec_deployment" "elasticcloud" {
  name = "ec-deployment"

  region                 = "gcp-us-central1"
  version                = "8.12.0"
  deployment_template_id = "gcp-general-purpose"
  elasticsearch = {
    autoscale = true

    cold = {
      autoscaling = {}
    }

    frozen = {
      autoscaling = {}
    }

    hot = {
      size = "1g"
      autoscaling = {
        max_size          = "8g"
        max_size_resource = "memory"
      } 
    }

    ml = {
      autoscaling = {}
    }

    warm = {
      autoscaling = {}
    }
  }

  kibana = {
    size          = "1g"
    size_resource = "memory"
    zone_count    = 1
  }

  lifecycle {
    ignore_changes = [
      elasticsearch.hot.size
    ]
  }
}

Steps to Reproduce

  1. Run Terraform Apply to deploy an older version of the EC stack
  2. Change the EC stack configuration in the EC console (e.g. by increasing the resources of one component)
  3. Change the EC stack to a newer version in the Terraform ec_deployment resource
  4. Run Terraform Apply to update the EC stack
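As a concrete sketch of step 3, only the version attribute changes in Terraform, while the console-side drift from step 2 stays outside of state (the previous version shown here is an example):

```hcl
resource "ec_deployment" "elasticcloud" {
  name                   = "ec-deployment"
  region                 = "gcp-us-central1"
  deployment_template_id = "gcp-general-purpose"

  # Step 3: bump the stack version, e.g. from "8.11.0" to "8.12.0".
  # Because the topology change from step 2 is not in Terraform state,
  # the next apply sends a version upgrade and a topology change in the
  # same update request, which EC rejects.
  version = "8.12.0"
}
```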

Context

This issue affects the provider's ability to update EC stacks and is blocking EC stack version upgrades.

Possible Solution

Your Environment

  • Version used: 0.9.0
  • Running against Elastic Cloud SaaS or Elastic Cloud Enterprise and version: Elastic Cloud SaaS
  • Environment name and version (e.g. Go 1.9): Terraform v1.3.6
  • Server type and version:
  • Operating System and version: linux_amd64
  • Link to your project:
@mmfernandespmg mmfernandespmg added the bug Something isn't working label Jan 18, 2024
srri commented Jan 23, 2024

I have also run into a similar issue.
The plan shows the following:

  # ec_deployment.prod_deployment[0] will be updated in-place
  ~ resource "ec_deployment" "prod_deployment" {
        id                     = "<redacted>"
        name                   = "<redacted>"
      ~ version                = "8.11.4" -> "8.12.0"
        # (8 unchanged attributes hidden)
    }

Just trying to update to 8.12.0, but then the apply fails with:

api error: 1 error occurred:
│ 	* clusters.topology_and_version_change.prohibited: You must perform a
│ version upgrade separately from changes to the cluster topology (memory,
│ number of zones, dedicated master nodes, etc). The following topology
│ changes have been detected: autoscaling_max changed in
│ `aws.es.datawarm.d3`, instance_configuration_id changed in
│ `aws.es.datawarm.d3`, instance_configuration_version changed in
│ `aws.es.datawarm.d3`, autoscaling_max changed in `aws.es.datacold.d3`,
│ instance_configuration_id changed in `aws.es.datacold.d3`,
│ instance_configuration_version changed in `aws.es.datacold.d3`
│ (resources.elasticsearch[0])

My ec_deployment doesn't even contain a warm or cold topology block:

resource "ec_deployment" "prod_deployment" {
  count = var.profile == "fpjs_prod" ? 1 : 0
  name  = "<redacted>"
  alias = "<redacted>"

  region                 = var.deployment_region
  deployment_template_id = "aws-storage-optimized-dense"
  version                = "8.12.0"

  elasticsearch = {

    frozen = {
      zone_count = var.elasticsearch_frozen_zone_count
      autoscaling = {
        max_size = "60g"
        max_size_resource = "memory"
      }
      size = "4g"
      size_resource = "memory"
    }

    hot = {
      zone_count = var.elasticsearch_hot_zone_count
      autoscaling = {
        max_storage = var.elasticsearch_hot_max_data_storage_gb
      }
    }

  }

  kibana = {
    topology = {
      instance_configuration_id = var.kibana_node_instance_type
      size                      = var.kibana_node_size
      zone_count                = var.kibana_zone_count
    }

    ref_id = "main-kibana"
  }
}

The main difference is that I don't think I actually changed anything in the console, and the failure doesn't show what the expected values should be (so I could either revert the change in the console or add it to the Terraform configuration).

srri commented Jan 24, 2024

I was able to resolve this by defining the missing autoscaling definitions in my Terraform configuration, but this wasn't previously necessary (they have never been there, and I've applied other changes).
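For illustration, a hedged sketch of what defining the missing autoscaling blocks might look like for the tiers flagged in the error output above. The tier names come from the error message; the size values are hypothetical and would need to match what the EC console currently reports for the deployment:

```hcl
elasticsearch = {
  # Explicitly pin the tiers the error message flagged, so Terraform's
  # view matches what autoscaling has already applied on the EC side.
  warm = {
    # aws.es.datawarm.d3 appeared in the error output
    autoscaling = {
      max_size          = "118g" # hypothetical; use the value shown in the EC console
      max_size_resource = "memory"
    }
  }

  cold = {
    # aws.es.datacold.d3 appeared in the error output
    autoscaling = {
      max_size          = "59g" # hypothetical; use the value shown in the EC console
      max_size_resource = "memory"
    }
  }
}
```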

zephyros-dev commented Nov 26, 2024

I was able to resolve this by defining the missing autoscaling definitions in my terraform, but this wasn't previously necessary (they have never been there, and I've applied other changes)

Can you clarify how you added the missing autoscaling definitions? I tried adding empty blocks for the other topology tiers, but that doesn't seem to help. Thanks.

    cold = {
      autoscaling = {}
    }

    frozen = {
      autoscaling = {}
    }

    ml = {
      autoscaling = {}
    }

    warm = {
      autoscaling = {}
    }
