This is the plan operation that it tries to execute:
Terraform will perform the following actions:

  # module.resilience-hub.awscc_resiliencehub_app.app will be created
  + resource "awscc_resiliencehub_app" "app" {
      + app_arn                 = (known after apply)
      + app_assessment_schedule = (known after apply)
      + app_template_body       = jsonencode(
            {
              + appComponents     = [
                  + {
                      + name          = "appcommon"
                      + resourceNames = []
                      + type          = "AWS::ResilienceHub::AppCommonAppComponent"
                    },
                ]
              + excludedResources = {}
              + resources         = []
              + version           = 2
            }
        )
      + description             = (known after apply)
      + drift_status            = (known after apply)
      + event_subscriptions     = (known after apply)
      + id                      = (known after apply)
      + name                    = "platform-utilities"
      + permission_model        = (known after apply)
      + resiliency_policy_arn   = (known after apply)
      + resource_mappings       = [
          + {
              + eks_source_name       = (known after apply)
              + logical_stack_name    = (known after apply)
              + mapping_type          = "Terraform"
              + physical_resource_id  = {
                  + aws_account_id = (known after apply)
                  + aws_region     = (known after apply)
                  + identifier     = "${CORRECT_PATH_TO_STATE_FILE_YES_IT_IS_CORRECT_DONT_WORRY}"
                  + type           = "Native"
                }
              + resource_name         = (known after apply)
              + terraform_source_name = "TerraformStateFile"
            },
        ]
      + tags                    = ${MY_TAGS_HERE_THAT_ARE_STATIC_SO_DONT_WORRY}
    }

  # module.resilience-hub.awscc_resiliencehub_resiliency_policy.policy will be created
  + resource "awscc_resiliencehub_resiliency_policy" "policy" {
      + data_location_constraint = (known after apply)
      + id                       = (known after apply)
      + policy                   = {
          + az = {
              + rpo_in_secs = 60
              + rto_in_secs = 300
            }
          + hardware = {
              + rpo_in_secs = 60
              + rto_in_secs = 300
            }
          + region = {
              + rpo_in_secs = 60
              + rto_in_secs = 300
            }
          + software = {
              + rpo_in_secs = 60
              + rto_in_secs = 300
            }
        }
      + policy_arn               = (known after apply)
      + policy_description       = (known after apply)
      + policy_name              = (known after apply)
      + tags                     = ${MY_TAGS_HERE_THAT_ARE_STATIC_SO_DONT_WORRY}
      + tier                     = "MissionCritical"
    }

  # module.resilience-hub.random_id.session will be created
  + resource "random_id" "session" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 16
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.
The apply then runs and fails with:
terraform apply -auto-approve -lock=true 883f898827e550db9874eef38810881f64891f3f
module.resilience-hub.random_id.session: Creating...
module.resilience-hub.random_id.session: Creation complete after 0s [id=h8b5qlGxJCTaNyw3nHHKUQ]
module.resilience-hub.awscc_resiliencehub_resiliency_policy.policy: Creating...
module.resilience-hub.awscc_resiliencehub_resiliency_policy.policy: Creation complete after 5s [id=arn:aws:resiliencehub:us-east-2:901957470878:resiliency-policy/4388ed86-0878-46d9-b181-f7c14c8612f1]
module.resilience-hub.awscc_resiliencehub_app.app: Creating...
module.resilience-hub.awscc_resiliencehub_app.app: Still creating... [10s elapsed]
│ Error: AWS SDK Go Service Operation Incomplete
│
│ with module.resilience-hub.awscc_resiliencehub_app.app,
│   on ../../../modules/resilience-hub/main.tf line 5, in resource "awscc_resiliencehub_app" "app":
│    5: resource "awscc_resiliencehub_app" "app" {
│
│ Waiting for Cloud Control API service CreateResource operation completion
│ returned: waiter state transitioned to FAILED. StatusMessage: Invalid app
│ template. App template must have at least 1 resource (Service:
│ Resiliencehub, Status Code:409, Request ID:
│ 4b6f876b-5d6d-4c1a-9ef7-df7458203c48). ErrorCode: AlreadyExist
script returned exit code 1
HOWEVER, the app gets created properly. It even has all the resources it should have, properly categorised. So it actually "works" just fine.
And I wouldn't have any problem with that, but if I run another plan afterwards, it detects that the app has been created and that it has resources "statically" defined, and it tries to destroy and recreate the app using the resources defined by the S3 state file. This again fails with the same error, but AGAIN the resource actually gets created properly.
In other words, I'm in a continuous loop.
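For context, this is roughly how the module is wired up on my side. The bucket, key, and input variable names below are placeholders rather than the module's real interface (the real values are redacted), but the key point is that the app's resource mapping is pointed at the same S3 state file this configuration writes its own state to:

  # Root configuration backend (placeholder bucket/key).
  terraform {
    backend "s3" {
      bucket = "my-terraform-state"
      key    = "platform-utilities/terraform.tfstate"
      region = "us-east-2"
    }
  }

  # The Resilience Hub module is then pointed at that same state file so it
  # can discover the resources to protect. The input names here are
  # placeholders, not the module's actual variables.
  module "resilience-hub" {
    source = "../../../modules/resilience-hub"

    app_name       = "platform-utilities"
    state_file_url = "s3://my-terraform-state/platform-utilities/terraform.tfstate"
    rto_in_secs    = 300
    rpo_in_secs    = 60
  }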
Is it maybe because I am referencing the "same" state file that is currently being used by Terraform? Does this module specifically require its own repository with its own pipeline? For what it's worth, I'm using the latest version of the module.
Thanks!