diff --git a/.changelog/35969.txt b/.changelog/30482.txt similarity index 100% rename from .changelog/35969.txt rename to .changelog/30482.txt diff --git a/.changelog/35970.txt b/.changelog/30965.txt similarity index 100% rename from .changelog/35970.txt rename to .changelog/30965.txt diff --git a/.changelog/35971.txt b/.changelog/33442.txt similarity index 100% rename from .changelog/35971.txt rename to .changelog/33442.txt diff --git a/.changelog/33578.txt b/.changelog/33578.txt new file mode 100644 index 00000000000..9f42ab17bec --- /dev/null +++ b/.changelog/33578.txt @@ -0,0 +1,15 @@ +```release-note:bug +resource/aws_cloudfront_distribution: Fix `IllegalUpdate` errors when updating a staging distribution associated with an `aws_cloudfront_continuous_deployment_policy` +``` + +```release-note:bug +resource/aws_cloudfront_continuous_deployment_policy: Fix `IllegalUpdate` errors when updating a staging `aws_cloudfront_distribution` that is part of continuous deployment +``` + +```release-note:bug +resource/aws_cloudfront_distribution: Fix `StagingDistributionInUse` errors when destroying a distribution associated with an `aws_cloudfront_continuous_deployment_policy` +``` + +```release-note:bug +resource/aws_cloudfront_distribution: Fix `PreconditionFailed` errors when destroying a distribution associated with an `aws_cloudfront_continuous_deployment_policy` +``` \ No newline at end of file diff --git a/.changelog/33660.txt b/.changelog/33660.txt new file mode 100644 index 00000000000..5b9afbf72a2 --- /dev/null +++ b/.changelog/33660.txt @@ -0,0 +1,15 @@ +```release-note:breaking-change +data-source/aws_s3_object: Following migration to [AWS SDK for Go v2](https://aws.github.io/aws-sdk-go-v2/), the `metadata` attribute's [keys](https://developer.hashicorp.com/terraform/language/expressions/types#maps-objects) are always [returned in lowercase](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3#HeadObjectOutput) +``` + +```release-note:note +data-source/aws_s3_object: The `metadata` attribute's keys are now always returned in lowercase. Please modify configurations as necessary +``` + +```release-note:breaking-change +data-source/aws_s3_bucket_object: Following migration to [AWS SDK for Go v2](https://aws.github.io/aws-sdk-go-v2/), the `metadata` attribute's [keys](https://developer.hashicorp.com/terraform/language/expressions/types#maps-objects) are always [returned in lowercase](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3#HeadObjectOutput) +``` + +```release-note:note +data-source/aws_s3_bucket_object: The `metadata` attribute's keys are now always returned in lowercase. 
Please modify configurations as necessary +``` \ No newline at end of file diff --git a/CHANGELOG.md b/CHANGELOG.md index 79a577eb7ca..86a62f37767 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -8,19 +8,27 @@ FEATURES: * **New Resource:** `aws_dms_replication_config` ([#32908](https://github.com/hashicorp/terraform-provider-aws/issues/32908)) * **New Resource:** `aws_rds_custom_db_engine_version` ([#33285](https://github.com/hashicorp/terraform-provider-aws/issues/33285)) +* **New Resource:** `aws_vpclattice_service_network` ([#30482](https://github.com/hashicorp/terraform-provider-aws/issues/30482)) ENHANCEMENTS: +* data-source/aws_opensearch_domain: Add `off_peak_window_options` attribute ([#30965](https://github.com/hashicorp/terraform-provider-aws/issues/30965)) * resource/aws_fsx_ontap_volume: Add `bypass_snaplock_enterprise_retention` argument and `snaplock_configuration` configuration block to support [SnapLock](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/snaplock.html) ([#32530](https://github.com/hashicorp/terraform-provider-aws/issues/32530)) * resource/aws_fsx_ontap_volume: Add `copy_tags_to_backups` and `snapshot_policy` arguments ([#32530](https://github.com/hashicorp/terraform-provider-aws/issues/32530)) * resource/aws_fsx_openzfs_volume: Add `delete_volume_options` argument ([#32530](https://github.com/hashicorp/terraform-provider-aws/issues/32530)) * resource/aws_lightsail_bucket: Add `force_delete` argument ([#33586](https://github.com/hashicorp/terraform-provider-aws/issues/33586)) +* resource/aws_opensearch_domain: Add `off_peak_window_options` configuration block ([#30965](https://github.com/hashicorp/terraform-provider-aws/issues/30965)) * resource/aws_opensearch_outbound_connection: Add `connection_properties`, `connection_mode` and `accept_connection` arguments ([#32990](https://github.com/hashicorp/terraform-provider-aws/issues/32990)) +* resource/aws_schemas_schema: Add `JSONSchemaDraft4` schema type support ([#33442](https://github.com/hashicorp/terraform-provider-aws/issues/33442)) * resource/aws_wafv2_rule_group: Add `rate_based_statement.custom_key` configuration block ([#33594](https://github.com/hashicorp/terraform-provider-aws/issues/33594)) * resource/aws_wafv2_web_acl: Add `rate_based_statement.custom_key` configuration block ([#33594](https://github.com/hashicorp/terraform-provider-aws/issues/33594)) BUG FIXES: +* resource/aws_cloudfront_continuous_deployment_policy: Fix `IllegalUpdate` errors when updating a staging `aws_cloudfront_distribution` that is part of continuous deployment ([#33578](https://github.com/hashicorp/terraform-provider-aws/issues/33578)) +* resource/aws_cloudfront_distribution: Fix `IllegalUpdate` errors when updating a staging distribution associated with an `aws_cloudfront_continuous_deployment_policy` ([#33578](https://github.com/hashicorp/terraform-provider-aws/issues/33578)) +* resource/aws_cloudfront_distribution: Fix `PreconditionFailed` errors when destroying a distribution associated with an `aws_cloudfront_continuous_deployment_policy` ([#33578](https://github.com/hashicorp/terraform-provider-aws/issues/33578)) +* resource/aws_cloudfront_distribution: Fix `StagingDistributionInUse` errors when destroying a distribution associated with an `aws_cloudfront_continuous_deployment_policy` ([#33578](https://github.com/hashicorp/terraform-provider-aws/issues/33578)) * resource/aws_glacier_vault_lock: Fail validation if duplicated keys are found in `policy` 
([#33570](https://github.com/hashicorp/terraform-provider-aws/issues/33570)) * resource/aws_iam_group_policy: Fail validation if duplicated keys are found in `policy` ([#33570](https://github.com/hashicorp/terraform-provider-aws/issues/33570)) * resource/aws_iam_policy: Fail validation if duplicated keys are found in `policy` ([#33570](https://github.com/hashicorp/terraform-provider-aws/issues/33570)) @@ -85,7 +93,7 @@ ENHANCEMENTS: * resource/aws_s3_object_copy: Add `checksum_algorithm` argument and `checksum_crc32`, `checksum_crc32c`, `checksum_sha1` and `checksum_sha256` attributes ([#33358](https://github.com/hashicorp/terraform-provider-aws/issues/33358)) * resource/aws_s3control_multi_region_access_point: Add `details.region.bucket_account_id` argument to support [cross-account Multi-Region Access Points](https://docs.aws.amazon.com/AmazonS3/latest/userguide/multi-region-access-point-buckets.html) ([#33416](https://github.com/hashicorp/terraform-provider-aws/issues/33416)) * resource/aws_s3control_multi_region_access_point: Add `details.region.region` attribute ([#33416](https://github.com/hashicorp/terraform-provider-aws/issues/33416)) -* resource/aws_schemas_schema: Add `JSONSchemaDraft4` schema type support ([#35971](https://github.com/hashicorp/terraform-provider-aws/issues/35971)) +* resource/aws_schemas_schema: Add `JSONSchemaDraft4` schema type support ([#33442](https://github.com/hashicorp/terraform-provider-aws/issues/33442)) * resource/aws_transfer_connector: Add `sftp_config` argument and make `as2_config` optional ([#32741](https://github.com/hashicorp/terraform-provider-aws/issues/32741)) * resource/aws_wafv2_web_acl: Retry resource Update on `WAFOptimisticLockException` errors ([#33432](https://github.com/hashicorp/terraform-provider-aws/issues/33432)) @@ -751,7 +759,7 @@ FEATURES: ENHANCEMENTS: * data-source/aws_autoscaling_group: Add `traffic_source` attribute ([#31527](https://github.com/hashicorp/terraform-provider-aws/issues/31527)) -* data-source/aws_opensearch_domain: Add `off_peak_window_options` attribute ([#35970](https://github.com/hashicorp/terraform-provider-aws/issues/35970)) +* data-source/aws_opensearch_domain: Add `off_peak_window_options` attribute ([#30965](https://github.com/hashicorp/terraform-provider-aws/issues/30965)) * provider: Increases size of HTTP request bodies in logs to 1 KB ([#31718](https://github.com/hashicorp/terraform-provider-aws/issues/31718)) * resource/aws_appsync_graphql_api: Add `visibility` argument ([#31369](https://github.com/hashicorp/terraform-provider-aws/issues/31369)) * resource/aws_appsync_graphql_api: Add plan time validation for `log_config.cloudwatch_logs_role_arn` ([#31369](https://github.com/hashicorp/terraform-provider-aws/issues/31369)) @@ -766,7 +774,7 @@ ENHANCEMENTS: * resource/aws_lambda_invocation: Add lifecycle_scope CRUD to invoke on each resource state transition ([#29367](https://github.com/hashicorp/terraform-provider-aws/issues/29367)) * resource/aws_lambda_layer_version_permission: Add `skip_destroy` attribute ([#29571](https://github.com/hashicorp/terraform-provider-aws/issues/29571)) * resource/aws_lambda_provisioned_concurrency_configuration: Add `skip_destroy` argument ([#31646](https://github.com/hashicorp/terraform-provider-aws/issues/31646)) -* resource/aws_opensearch_domain: Add `off_peak_window_options` configuration block ([#35970](https://github.com/hashicorp/terraform-provider-aws/issues/35970)) +* resource/aws_opensearch_domain: Add `off_peak_window_options` configuration block 
([#30965](https://github.com/hashicorp/terraform-provider-aws/issues/30965)) * resource/aws_sagemaker_endpoint_configuration: Add and `shadow_production_variants.serverless_config.provisioned_concurrency` arguments ([#31398](https://github.com/hashicorp/terraform-provider-aws/issues/31398)) * resource/aws_transfer_server: Add support for `TransferSecurityPolicy-2023-05` `security_policy_name` value ([#31536](https://github.com/hashicorp/terraform-provider-aws/issues/31536)) diff --git a/internal/conns/awsclient.go b/internal/conns/awsclient.go index 40f2242413a..1b833cb6ecd 100644 --- a/internal/conns/awsclient.go +++ b/internal/conns/awsclient.go @@ -42,6 +42,14 @@ type AWSClient struct { stsRegion string // From provider configuration. } +// CredentialsProvider returns the AWS SDK for Go v2 credentials provider. +func (client *AWSClient) CredentialsProvider() aws_sdkv2.CredentialsProvider { + if client.awsConfig == nil { + return nil + } + return client.awsConfig.Credentials +} + // PartitionHostname returns a hostname with the provider domain suffix for the partition // e.g. PREFIX.amazonaws.com // The prefix should not contain a trailing period. diff --git a/internal/service/cloudfront/continuous_deployment_policy.go b/internal/service/cloudfront/continuous_deployment_policy.go index 5bcc600beb3..d91bed3b4b0 100644 --- a/internal/service/cloudfront/continuous_deployment_policy.go +++ b/internal/service/cloudfront/continuous_deployment_policy.go @@ -271,16 +271,9 @@ func (r *resourceContinuousDeploymentPolicy) Delete(ctx context.Context, req res return } - in := &cloudfront.DeleteContinuousDeploymentPolicyInput{ - Id: aws.String(state.ID.ValueString()), - IfMatch: aws.String(state.ETag.ValueString()), - } + err := DeleteCDP(ctx, conn, state.ID.ValueString()) - _, err := conn.DeleteContinuousDeploymentPolicyWithContext(ctx, in) if err != nil { - if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeNoSuchContinuousDeploymentPolicy) { - return - } resp.Diagnostics.AddError( create.ProblemStandardMessage(names.CloudFront, create.ErrActionDeleting, ResNameContinuousDeploymentPolicy, state.ID.String(), err), err.Error(), @@ -289,6 +282,78 @@ func (r *resourceContinuousDeploymentPolicy) Delete(ctx context.Context, req res } } +func DeleteCDP(ctx context.Context, conn *cloudfront.CloudFront, id string) error { + etag, err := cdpETag(ctx, conn, id) + if tfresource.NotFound(err) { + return nil + } + + if err != nil { + return err + } + + in := &cloudfront.DeleteContinuousDeploymentPolicyInput{ + Id: aws.String(id), + IfMatch: etag, + } + + _, err = conn.DeleteContinuousDeploymentPolicyWithContext(ctx, in) + if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeNoSuchContinuousDeploymentPolicy) { + return nil + } + + if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodePreconditionFailed, cloudfront.ErrCodeInvalidIfMatchVersion) { + etag, err = cdpETag(ctx, conn, id) + if tfresource.NotFound(err) { + return nil + } + + if err != nil { + return err + } + + in.SetIfMatch(aws.StringValue(etag)) + + _, err = conn.DeleteContinuousDeploymentPolicyWithContext(ctx, in) + if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeNoSuchContinuousDeploymentPolicy) { + return nil + } + } + + return err +} + +func disableContinuousDeploymentPolicy(ctx context.Context, conn *cloudfront.CloudFront, id string) error { + out, err := FindContinuousDeploymentPolicyByID(ctx, conn, id) + if tfresource.NotFound(err) || out == nil || out.ContinuousDeploymentPolicy == nil || out.ContinuousDeploymentPolicy.ContinuousDeploymentPolicyConfig == 
nil { + return nil + } + + if !aws.BoolValue(out.ContinuousDeploymentPolicy.ContinuousDeploymentPolicyConfig.Enabled) { + return nil + } + + out.ContinuousDeploymentPolicy.ContinuousDeploymentPolicyConfig.SetEnabled(false) + + in := &cloudfront.UpdateContinuousDeploymentPolicyInput{ + Id: out.ContinuousDeploymentPolicy.Id, + IfMatch: out.ETag, + ContinuousDeploymentPolicyConfig: out.ContinuousDeploymentPolicy.ContinuousDeploymentPolicyConfig, + } + + _, err = conn.UpdateContinuousDeploymentPolicyWithContext(ctx, in) + return err +} + +func cdpETag(ctx context.Context, conn *cloudfront.CloudFront, id string) (*string, error) { + output, err := FindContinuousDeploymentPolicyByID(ctx, conn, id) + if err != nil { + return nil, err + } + + return output.ETag, nil +} + func (r *resourceContinuousDeploymentPolicy) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { resource.ImportStatePassthroughID(ctx, path.Root("id"), req, resp) } diff --git a/internal/service/cloudfront/continuous_deployment_policy_test.go b/internal/service/cloudfront/continuous_deployment_policy_test.go index bc5520ea499..49a296cd222 100644 --- a/internal/service/cloudfront/continuous_deployment_policy_test.go +++ b/internal/service/cloudfront/continuous_deployment_policy_test.go @@ -11,6 +11,7 @@ import ( "github.com/aws/aws-sdk-go/service/cloudfront" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" @@ -20,6 +21,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) +const ( + defaultDomain = "www.example.com" +) + func TestAccCloudFrontContinuousDeploymentPolicy_basic(t *testing.T) { ctx := acctest.Context(t) var policy cloudfront.GetContinuousDeploymentPolicyOutput @@ -39,7 +44,7 @@ func TestAccCloudFrontContinuousDeploymentPolicy_basic(t *testing.T) { CheckDestroy: testAccCheckContinuousDeploymentPolicyDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccContinuousDeploymentPolicyConfig_init(), + Config: testAccContinuousDeploymentPolicyConfig_init(defaultDomain), Check: resource.ComposeTestCheckFunc( testAccCheckDistributionExists(ctx, stagingDistributionResourceName, &stagingDistribution), testAccCheckDistributionExists(ctx, productionDistributionResourceName, &productionDistribution), @@ -93,7 +98,7 @@ func TestAccCloudFrontContinuousDeploymentPolicy_disappears(t *testing.T) { CheckDestroy: testAccCheckContinuousDeploymentPolicyDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccContinuousDeploymentPolicyConfig_init(), + Config: testAccContinuousDeploymentPolicyConfig_init(defaultDomain), Check: resource.ComposeTestCheckFunc( testAccCheckDistributionExists(ctx, stagingDistributionResourceName, &stagingDistribution), testAccCheckDistributionExists(ctx, productionDistributionResourceName, &productionDistribution), @@ -125,7 +130,7 @@ func TestAccCloudFrontContinuousDeploymentPolicy_trafficConfig(t *testing.T) { CheckDestroy: testAccCheckContinuousDeploymentPolicyDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccContinuousDeploymentPolicyConfig_init(), + Config: testAccContinuousDeploymentPolicyConfig_init(defaultDomain), Check: resource.ComposeTestCheckFunc( testAccCheckDistributionExists(ctx, stagingDistributionResourceName, &stagingDistribution), 
testAccCheckDistributionExists(ctx, productionDistributionResourceName, &productionDistribution), @@ -133,7 +138,7 @@ func TestAccCloudFrontContinuousDeploymentPolicy_trafficConfig(t *testing.T) { ), }, { - Config: testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleWeight(false, "0.01", 300, 600), + Config: testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleWeight(false, "0.01", 300, 600, defaultDomain), Check: resource.ComposeTestCheckFunc( testAccCheckContinuousDeploymentPolicyExists(ctx, resourceName, &policy), resource.TestCheckResourceAttr(resourceName, "enabled", "false"), @@ -153,7 +158,7 @@ func TestAccCloudFrontContinuousDeploymentPolicy_trafficConfig(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleWeight(true, "0.02", 600, 1200), + Config: testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleWeight(true, "0.02", 600, 1200, defaultDomain), Check: resource.ComposeTestCheckFunc( testAccCheckContinuousDeploymentPolicyExists(ctx, resourceName, &policy), resource.TestCheckResourceAttr(resourceName, "enabled", "true"), @@ -202,6 +207,81 @@ func TestAccCloudFrontContinuousDeploymentPolicy_trafficConfig(t *testing.T) { }) } +// https://github.com/hashicorp/terraform-provider-aws/issues/33338 +func TestAccCloudFrontContinuousDeploymentPolicy_domainChange(t *testing.T) { + ctx := acctest.Context(t) + var policy cloudfront.GetContinuousDeploymentPolicyOutput + var stagingDistribution cloudfront.Distribution + var productionDistribution cloudfront.Distribution + resourceName := "aws_cloudfront_continuous_deployment_policy.test" + stagingDistributionResourceName := "aws_cloudfront_distribution.staging" + productionDistributionResourceName := "aws_cloudfront_distribution.test" + domain1 := fmt.Sprintf("%s.example.com", sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)) + domain2 := fmt.Sprintf("%s.example.com", sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, cloudfront.EndpointsID) + }, + ErrorCheck: acctest.ErrorCheck(t, cloudfront.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckContinuousDeploymentPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccContinuousDeploymentPolicyConfig_init(domain1), + Check: resource.ComposeTestCheckFunc( + testAccCheckDistributionExists(ctx, stagingDistributionResourceName, &stagingDistribution), + testAccCheckDistributionExists(ctx, productionDistributionResourceName, &productionDistribution), + testAccCheckContinuousDeploymentPolicyExists(ctx, resourceName, &policy), + ), + }, + { + Config: testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleWeight(true, "0.01", 300, 600, domain1), + Check: resource.ComposeTestCheckFunc( + testAccCheckContinuousDeploymentPolicyExists(ctx, resourceName, &policy), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "traffic_config.*", map[string]string{ + "type": "SingleWeight", + "single_weight_config.#": "1", + "single_weight_config.0.weight": "0.01", + "single_weight_config.0.session_stickiness_config.#": "1", + "single_weight_config.0.session_stickiness_config.0.idle_ttl": "300", + "single_weight_config.0.session_stickiness_config.0.maximum_ttl": "600", + }), + resource.TestCheckTypeSetElemNestedAttrs(stagingDistributionResourceName, 
"origin.*", map[string]string{ + "domain_name": domain1, + }), + resource.TestCheckTypeSetElemNestedAttrs(productionDistributionResourceName, "origin.*", map[string]string{ + "domain_name": domain1, + }), + ), + }, + { + Config: testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleWeight(true, "0.01", 300, 600, domain2), + Check: resource.ComposeTestCheckFunc( + testAccCheckContinuousDeploymentPolicyExists(ctx, resourceName, &policy), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "traffic_config.*", map[string]string{ + "type": "SingleWeight", + "single_weight_config.#": "1", + "single_weight_config.0.weight": "0.01", + "single_weight_config.0.session_stickiness_config.#": "1", + "single_weight_config.0.session_stickiness_config.0.idle_ttl": "300", + "single_weight_config.0.session_stickiness_config.0.maximum_ttl": "600", + }), + resource.TestCheckTypeSetElemNestedAttrs(stagingDistributionResourceName, "origin.*", map[string]string{ + "domain_name": domain2, + }), + resource.TestCheckTypeSetElemNestedAttrs(productionDistributionResourceName, "origin.*", map[string]string{ + "domain_name": domain2, + }), + ), + }, + }, + }) +} + func testAccCheckContinuousDeploymentPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) @@ -249,8 +329,8 @@ func testAccCheckContinuousDeploymentPolicyExists(ctx context.Context, name stri } } -func testAccContinuousDeploymentPolicyConfigBase_staging() string { - return ` +func testAccContinuousDeploymentPolicyConfigBase_staging(domain string) string { + return fmt.Sprintf(` resource "aws_cloudfront_distribution" "staging" { enabled = true retain_on_delete = false @@ -272,7 +352,7 @@ resource "aws_cloudfront_distribution" "staging" { } origin { - domain_name = "www.example.com" + domain_name = %[1]q origin_id = "test" custom_origin_config { @@ -293,15 +373,15 @@ resource "aws_cloudfront_distribution" "staging" { cloudfront_default_certificate = true } } -` +`, domain) } // The initial production distribution must be created _without_ the continuous // deployment policy attached. Example error: // // InvalidArgument: Continuous deployment policy is not supported during distribution creation. 
-func testAccContinuousDeploymentPolicyConfigBase_productionInit() string { - return ` +func testAccContinuousDeploymentPolicyConfigBase_productionInit(domain string) string { + return fmt.Sprintf(` resource "aws_cloudfront_distribution" "test" { enabled = true retain_on_delete = false @@ -322,7 +402,7 @@ resource "aws_cloudfront_distribution" "test" { } origin { - domain_name = "www.example.com" + domain_name = %[1]q origin_id = "test" custom_origin_config { @@ -343,11 +423,11 @@ resource "aws_cloudfront_distribution" "test" { cloudfront_default_certificate = true } } -` +`, domain) } -func testAccContinuousDeploymentPolicyConfigBase_production() string { - return ` +func testAccContinuousDeploymentPolicyConfigBase_production(domain string) string { + return fmt.Sprintf(` resource "aws_cloudfront_distribution" "test" { enabled = true retain_on_delete = false @@ -370,7 +450,7 @@ resource "aws_cloudfront_distribution" "test" { } origin { - domain_name = "www.example.com" + domain_name = %[1]q origin_id = "test" custom_origin_config { @@ -391,7 +471,7 @@ resource "aws_cloudfront_distribution" "test" { cloudfront_default_certificate = true } } -` +`, domain) } // testAccContinuousDeploymentPolicyConfig_init initializes the staging and production @@ -405,10 +485,10 @@ resource "aws_cloudfront_distribution" "test" { // // ContinuousDeploymentPolicyInUse: The specified continuous deployment policy is // currently associated with a distribution. -func testAccContinuousDeploymentPolicyConfig_init() string { +func testAccContinuousDeploymentPolicyConfig_init(domain string) string { return acctest.ConfigCompose( - testAccContinuousDeploymentPolicyConfigBase_staging(), - testAccContinuousDeploymentPolicyConfigBase_productionInit(), + testAccContinuousDeploymentPolicyConfigBase_staging(domain), + testAccContinuousDeploymentPolicyConfigBase_productionInit(domain), ` resource "aws_cloudfront_continuous_deployment_policy" "test" { enabled = false @@ -430,8 +510,8 @@ resource "aws_cloudfront_continuous_deployment_policy" "test" { func testAccContinuousDeploymentPolicyConfig_basic() string { return acctest.ConfigCompose( - testAccContinuousDeploymentPolicyConfigBase_staging(), - testAccContinuousDeploymentPolicyConfigBase_production(), + testAccContinuousDeploymentPolicyConfigBase_staging(defaultDomain), + testAccContinuousDeploymentPolicyConfigBase_production(defaultDomain), ` resource "aws_cloudfront_continuous_deployment_policy" "test" { enabled = false @@ -451,10 +531,10 @@ resource "aws_cloudfront_continuous_deployment_policy" "test" { `) } -func testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleWeight(enabled bool, weight string, idleTTL, maxTTL int) string { +func testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleWeight(enabled bool, weight string, idleTTL, maxTTL int, domain string) string { return acctest.ConfigCompose( - testAccContinuousDeploymentPolicyConfigBase_staging(), - testAccContinuousDeploymentPolicyConfigBase_production(), + testAccContinuousDeploymentPolicyConfigBase_staging(domain), + testAccContinuousDeploymentPolicyConfigBase_production(domain), fmt.Sprintf(` resource "aws_cloudfront_continuous_deployment_policy" "test" { enabled = %[1]t @@ -480,8 +560,8 @@ resource "aws_cloudfront_continuous_deployment_policy" "test" { func testAccContinuousDeploymentPolicyConfig_TrafficConfig_singleHeader(enabled bool, header, value string) string { return acctest.ConfigCompose( - testAccContinuousDeploymentPolicyConfigBase_staging(), - 
testAccContinuousDeploymentPolicyConfigBase_production(), + testAccContinuousDeploymentPolicyConfigBase_staging(defaultDomain), + testAccContinuousDeploymentPolicyConfigBase_production(defaultDomain), fmt.Sprintf(` resource "aws_cloudfront_continuous_deployment_policy" "test" { enabled = %[1]t diff --git a/internal/service/cloudfront/distribution.go b/internal/service/cloudfront/distribution.go index 4870ca0f6d6..8f8bc1389a1 100644 --- a/internal/service/cloudfront/distribution.go +++ b/internal/service/cloudfront/distribution.go @@ -232,6 +232,7 @@ func ResourceDistribution() *schema.Resource { "continuous_deployment_policy_id": { Type: schema.TypeString, Optional: true, + Computed: true, }, "comment": { Type: schema.TypeString, @@ -625,7 +626,7 @@ func ResourceDistribution() *schema.Resource { }, "origin_shield_region": { Type: schema.TypeString, - Required: true, + Optional: true, ValidateFunc: validation.StringMatch(regionRegexp, "must be a valid AWS Region Code"), }, }, @@ -870,7 +871,7 @@ func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, met if d.Get("wait_for_deployment").(bool) { log.Printf("[DEBUG] Waiting until CloudFront Distribution (%s) is deployed", d.Id()) - if err := DistributionWaitUntilDeployed(ctx, d.Id(), meta); err != nil { + if err := WaitDistributionDeployed(ctx, conn, d.Id()); err != nil { return sdkdiag.AppendErrorf(diags, "waiting until CloudFront Distribution (%s) is deployed: %s", d.Id(), err) } } @@ -964,7 +965,7 @@ func resourceDistributionUpdate(ctx context.Context, d *schema.ResourceData, met if d.Get("wait_for_deployment").(bool) { log.Printf("[DEBUG] Waiting until CloudFront Distribution (%s) is deployed", d.Id()) - if err := DistributionWaitUntilDeployed(ctx, d.Id(), meta); err != nil { + if err := WaitDistributionDeployed(ctx, conn, d.Id()); err != nil { return sdkdiag.AppendErrorf(diags, "waiting until CloudFront Distribution (%s) is deployed: %s", d.Id(), err) } } @@ -977,114 +978,134 @@ func resourceDistributionDelete(ctx context.Context, d *schema.ResourceData, met var diags diag.Diagnostics conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) - if d.Get("retain_on_delete").(bool) { - // Check if we need to disable first. - output, err := FindDistributionByID(ctx, conn, d.Id()) - - if err != nil { - return sdkdiag.AppendErrorf(diags, "reading CloudFront Distribution (%s): %s", d.Id(), err) - } + if d.Get("arn").(string) == "" { + diags = append(diags, resourceDistributionRead(ctx, d, meta)...) + } - if !aws.BoolValue(output.Distribution.DistributionConfig.Enabled) { - log.Printf("[WARN] Removing CloudFront Distribution ID %q with `retain_on_delete` set. 
Please delete this distribution manually.", d.Id()) - return diags + if v := d.Get("continuous_deployment_policy_id").(string); v != "" { + if err := disableContinuousDeploymentPolicy(ctx, conn, v); err != nil { + return create.DiagError(names.CloudFront, create.ErrActionDeleting, ResNameDistribution, d.Id(), err) } - input := &cloudfront.UpdateDistributionInput{ - DistributionConfig: output.Distribution.DistributionConfig, - Id: aws.String(d.Id()), - IfMatch: output.ETag, + if err := WaitDistributionDeployed(ctx, conn, d.Id()); err != nil { + return diag.Errorf("waiting until CloudFront Distribution (%s) is deployed: %s", d.Id(), err) } - input.DistributionConfig.Enabled = aws.Bool(false) - - _, err = conn.UpdateDistributionWithContext(ctx, input) + } - if err != nil { - return sdkdiag.AppendErrorf(diags, "disabling CloudFront Distribution (%s): %s", d.Id(), err) - } + if err := disableDistribution(ctx, conn, d.Id()); err != nil { + return diag.Errorf("disabling CloudFront Distribution (%s): %s", d.Id(), err) + } + if d.Get("retain_on_delete").(bool) { log.Printf("[WARN] Removing CloudFront Distribution ID %q with `retain_on_delete` set. Please delete this distribution manually.", d.Id()) return diags } - deleteDistroInput := &cloudfront.DeleteDistributionInput{ - Id: aws.String(d.Id()), - IfMatch: aws.String(d.Get("etag").(string)), + err := deleteDistribution(ctx, conn, d.Id()) + if err == nil || tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeNoSuchDistribution) { + return diags + } + + // Disable distribution if it is not yet disabled and attempt deletion again. + // Here we update via the deployed configuration to ensure we are not submitting an out of date + // configuration from the Terraform configuration, should other changes have occurred manually. + if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeDistributionNotDisabled) { + if err = disableDistribution(ctx, conn, d.Id()); err != nil { + return diag.Errorf("disabling CloudFront Distribution (%s): %s", d.Id(), err) + } + + _, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, 3*time.Minute, func() (interface{}, error) { + return nil, deleteDistribution(ctx, conn, d.Id()) + }, cloudfront.ErrCodeDistributionNotDisabled) } - log.Printf("[DEBUG] Deleting CloudFront Distribution: %s", d.Id()) - _, err := conn.DeleteDistributionWithContext(ctx, deleteDistroInput) + if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodePreconditionFailed, cloudfront.ErrCodeInvalidIfMatchVersion) { + _, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, 1*time.Minute, func() (interface{}, error) { + return nil, deleteDistribution(ctx, conn, d.Id()) + }, cloudfront.ErrCodePreconditionFailed) + } - if err == nil || tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeNoSuchDistribution) { + if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeNoSuchDistribution) { return diags } - // Refresh our ETag if it is out of date and attempt deletion again. 
- if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeInvalidIfMatchVersion) { - var output *cloudfront.GetDistributionOutput - output, err = FindDistributionByID(ctx, conn, d.Id()) - - if err != nil { - return sdkdiag.AppendErrorf(diags, "reading CloudFront Distribution (%s): %s", d.Id(), err) + if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeDistributionNotDisabled) { + if err = disableDistribution(ctx, conn, d.Id()); err != nil { + return diag.Errorf("disabling CloudFront Distribution (%s): %s", d.Id(), err) } - deleteDistroInput.IfMatch = output.ETag + err = deleteDistribution(ctx, conn, d.Id()) + } - _, err = conn.DeleteDistributionWithContext(ctx, deleteDistroInput) + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting CloudFront Distribution (%s): %s", d.Id(), err) } - // Disable distribution if it is not yet disabled and attempt deletion again. - // Here we update via the deployed configuration to ensure we are not submitting an out of date - // configuration from the Terraform configuration, should other changes have occurred manually. - if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeDistributionNotDisabled) { - var output *cloudfront.GetDistributionOutput - output, err = FindDistributionByID(ctx, conn, d.Id()) + return diags +} - if err != nil { - return sdkdiag.AppendErrorf(diags, "reading CloudFront Distribution (%s): %s", d.Id(), err) - } +func deleteDistribution(ctx context.Context, conn *cloudfront.CloudFront, id string) error { + etag, err := distroETag(ctx, conn, id) + if err != nil { + return err + } - updateDistroInput := &cloudfront.UpdateDistributionInput{ - DistributionConfig: output.Distribution.DistributionConfig, - Id: aws.String(d.Id()), - IfMatch: output.ETag, - } - updateDistroInput.DistributionConfig.Enabled = aws.Bool(false) - var updateDistroOutput *cloudfront.UpdateDistributionOutput + in := &cloudfront.DeleteDistributionInput{ + Id: aws.String(id), + IfMatch: aws.String(etag), + } - updateDistroOutput, err = conn.UpdateDistributionWithContext(ctx, updateDistroInput) + if _, err := conn.DeleteDistributionWithContext(ctx, in); err != nil { + return err + } - if err != nil { - return sdkdiag.AppendErrorf(diags, "disabling CloudFront Distribution (%s): %s", d.Id(), err) - } + if err := WaitDistributionDeleted(ctx, conn, id); err != nil && !tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeNoSuchDistribution) { + return err + } - if err := DistributionWaitUntilDeployed(ctx, d.Id(), meta); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting until CloudFront Distribution (%s) is deployed: %s", d.Id(), err) - } + return nil +} - deleteDistroInput.IfMatch = updateDistroOutput.ETag +func distroETag(ctx context.Context, conn *cloudfront.CloudFront, id string) (string, error) { + output, err := FindDistributionByID(ctx, conn, id) + if err != nil { + return "", err + } - _, err = conn.DeleteDistributionWithContext(ctx, deleteDistroInput) + return aws.StringValue(output.ETag), nil +} - // CloudFront has eventual consistency issues even for "deployed" state. 
- // Occasionally the DeleteDistribution call will return this error as well, in which retries will succeed: - // * PreconditionFailed: The request failed because it didn't meet the preconditions in one or more request-header fields - if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeDistributionNotDisabled, cloudfront.ErrCodePreconditionFailed) { - _, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, 2*time.Minute, func() (interface{}, error) { - return conn.DeleteDistributionWithContext(ctx, deleteDistroInput) - }, cloudfront.ErrCodeDistributionNotDisabled, cloudfront.ErrCodePreconditionFailed) - } +func disableDistribution(ctx context.Context, conn *cloudfront.CloudFront, id string) error { + if err := WaitDistributionDeployed(ctx, conn, id); err != nil { + return err } - if tfawserr.ErrCodeEquals(err, cloudfront.ErrCodeNoSuchDistribution) { - return diags + out, err := FindDistributionByID(ctx, conn, id) + if err != nil { + return err + } + + if !aws.BoolValue(out.Distribution.DistributionConfig.Enabled) { + return nil } + in := &cloudfront.UpdateDistributionInput{ + DistributionConfig: out.Distribution.DistributionConfig, + Id: aws.String(id), + IfMatch: out.ETag, + } + in.DistributionConfig.Enabled = aws.Bool(false) + + _, err = conn.UpdateDistributionWithContext(ctx, in) if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting CloudFront Distribution (%s): %s", d.Id(), err) + return err } - return diags + if err := WaitDistributionDeployed(ctx, conn, id); err != nil { + return err + } + + return nil } func FindDistributionByID(ctx context.Context, conn *cloudfront.CloudFront, id string) (*cloudfront.GetDistributionOutput, error) { @@ -1112,14 +1133,55 @@ func FindDistributionByID(ctx context.Context, conn *cloudfront.CloudFront, id s return output, nil } +func FindDistributionByDomainName(ctx context.Context, conn *cloudfront.CloudFront, name string) (*cloudfront.DistributionSummary, error) { + var dist *cloudfront.DistributionSummary + + input := &cloudfront.ListDistributionsInput{} + + err := conn.ListDistributionsPagesWithContext(ctx, input, func(page *cloudfront.ListDistributionsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, d := range page.DistributionList.Items { + if d == nil { + continue + } + + if aws.StringValue(d.DomainName) == name { + dist = d + return false + } + } + + return !lastPage + }) + + return dist, err +} + // resourceAwsCloudFrontWebDistributionWaitUntilDeployed blocks until the // distribution is deployed. It currently takes exactly 15 minutes to deploy // but that might change in the future. 
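// (Hence the generous 90-minute Timeout on the StateChangeConf below.)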
-func DistributionWaitUntilDeployed(ctx context.Context, id string, meta interface{}) error { +func WaitDistributionDeployed(ctx context.Context, conn *cloudfront.CloudFront, id string) error { stateConf := &retry.StateChangeConf{ Pending: []string{"InProgress"}, Target: []string{"Deployed"}, - Refresh: resourceWebDistributionStateRefreshFunc(ctx, id, meta), + Refresh: distributionRefreshFunc(ctx, conn, id), + Timeout: 90 * time.Minute, + MinTimeout: 15 * time.Second, + Delay: 1 * time.Minute, + } + + _, err := stateConf.WaitForStateContext(ctx) + return err +} + +func WaitDistributionDeleted(ctx context.Context, conn *cloudfront.CloudFront, id string) error { + stateConf := &retry.StateChangeConf{ + Pending: []string{"InProgress", "Deployed"}, + Target: []string{}, + Refresh: distributionRefreshFunc(ctx, conn, id), Timeout: 90 * time.Minute, MinTimeout: 15 * time.Second, Delay: 1 * time.Minute, @@ -1130,9 +1192,8 @@ func DistributionWaitUntilDeployed(ctx context.Context, id string, meta interfac } // The refresh function for resourceAwsCloudFrontWebDistributionWaitUntilDeployed. -func resourceWebDistributionStateRefreshFunc(ctx context.Context, id string, meta interface{}) retry.StateRefreshFunc { +func distributionRefreshFunc(ctx context.Context, conn *cloudfront.CloudFront, id string) retry.StateRefreshFunc { return func() (interface{}, string, error) { - conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) params := &cloudfront.GetDistributionInput{ Id: aws.String(id), } diff --git a/internal/service/cloudfront/distribution_configuration_structure.go b/internal/service/cloudfront/distribution_configuration_structure.go index 699da1b6a77..f7a2e0e7448 100644 --- a/internal/service/cloudfront/distribution_configuration_structure.go +++ b/internal/service/cloudfront/distribution_configuration_structure.go @@ -107,13 +107,10 @@ func flattenDistributionConfig(d *schema.ResourceData, distributionConfig *cloud d.Set("staging", distributionConfig.Staging) d.Set("web_acl_id", distributionConfig.WebACLId) - if !aws.BoolValue(distributionConfig.Staging) { - // Only set this for production distributions. While staging distributions do - // return a value when their domain name is referenced in a continuous deployment - // policy, this attribute is optional (not optional/computed) to correctly - // trigger changes when a policy is removed from a production distribution. - d.Set("continuous_deployment_policy_id", distributionConfig.ContinuousDeploymentPolicyId) - } + // Not having this set for staging distributions causes IllegalUpdate errors when making updates of any kind. + // If this absolutely must not be optional/computed, the policy ID will need to be retrieved and set for each + // API call for staging distributions. 
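+ // The attribute is therefore marked Optional+Computed in ResourceDistribution's schema so the value
+ // read back here does not produce a spurious diff for staging distributions.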
+ d.Set("continuous_deployment_policy_id", distributionConfig.ContinuousDeploymentPolicyId) if distributionConfig.CustomErrorResponses != nil { err = d.Set("custom_error_response", FlattenCustomErrorResponses(distributionConfig.CustomErrorResponses)) diff --git a/internal/service/cloudfront/distribution_test.go b/internal/service/cloudfront/distribution_test.go index d1c94d4702d..5c61eaf8e4a 100644 --- a/internal/service/cloudfront/distribution_test.go +++ b/internal/service/cloudfront/distribution_test.go @@ -589,14 +589,6 @@ func TestAccCloudFrontDistribution_Origin_originShield(t *testing.T) { Config: testAccDistributionConfig_originItem(rName, originShieldItem(`null`, `data.aws_region.current.name`)), ExpectError: regexache.MustCompile(`Missing required argument`), }, - { - Config: testAccDistributionConfig_originItem(rName, originShieldItem(`false`, `null`)), - ExpectError: regexache.MustCompile(`Missing required argument`), - }, - { - Config: testAccDistributionConfig_originItem(rName, originShieldItem(`true`, `null`)), - ExpectError: regexache.MustCompile(`Missing required argument`), - }, { Config: testAccDistributionConfig_originItem(rName, originShieldItem(`false`, `""`)), ExpectError: regexache.MustCompile(`.*must be a valid AWS Region Code.*`), @@ -1640,7 +1632,7 @@ func testAccCheckDistributionDisappears(ctx context.Context, distribution *cloud func testAccCheckDistributionWaitForDeployment(ctx context.Context, distribution *cloudfront.Distribution) resource.TestCheckFunc { return func(s *terraform.State) error { - return tfcloudfront.DistributionWaitUntilDeployed(ctx, aws.StringValue(distribution.Id), acctest.Provider.Meta()) + return tfcloudfront.WaitDistributionDeployed(ctx, acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx), aws.StringValue(distribution.Id)) } } diff --git a/internal/service/cloudfront/sweep.go b/internal/service/cloudfront/sweep.go index ce762d67b3a..eaa53306eb7 100644 --- a/internal/service/cloudfront/sweep.go +++ b/internal/service/cloudfront/sweep.go @@ -27,6 +27,12 @@ func init() { }, }) + // DO NOT add a continuous deployment policy sweeper as these are swept as part of the distribution sweeper + // resource.AddTestSweepers("aws_cloudfront_continuous_deployment_policy", &resource.Sweeper{ + // Name: "aws_cloudfront_continuous_deployment_policy", + // F: sweepContinuousDeploymentPolicies, + //}) + resource.AddTestSweepers("aws_cloudfront_distribution", &resource.Sweeper{ Name: "aws_cloudfront_distribution", F: sweepDistributions, @@ -58,6 +64,9 @@ func init() { resource.AddTestSweepers("aws_cloudfront_monitoring_subscription", &resource.Sweeper{ Name: "aws_cloudfront_monitoring_subscription", F: sweepMonitoringSubscriptions, + Dependencies: []string{ + "aws_cloudfront_distribution", + }, }) resource.AddTestSweepers("aws_cloudfront_origin_access_control", &resource.Sweeper{ @@ -151,6 +160,26 @@ func sweepCachePolicies(region string) error { } func sweepDistributions(region string) error { + // sweep: + // 1. Production Distributions + if err := sweepDistributionsByProductionStaging(region, false); err != nil { + log.Printf("[WARN] %s", err) + } + + // 2. Continuous Deployment Policies + if err := sweepContinuousDeploymentPolicies(region); err != nil { + log.Printf("[WARN] %s", err) + } + + // 3. 
Staging Distributions + if err := sweepDistributionsByProductionStaging(region, true); err != nil { + log.Printf("[WARN] %s", err) + } + + return nil +} + +func sweepDistributionsByProductionStaging(region string, staging bool) error { ctx := sweep.Context(region) client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { @@ -160,6 +189,12 @@ input := &cloudfront.ListDistributionsInput{} sweepResources := make([]sweep.Sweepable, 0) + if staging { + log.Printf("[INFO] Sweeping staging distributions") + } else { + log.Printf("[INFO] Sweeping production distributions") + } + err = conn.ListDistributionsPagesWithContext(ctx, input, func(page *cloudfront.ListDistributionsOutput, lastPage bool) bool { if page == nil { return !lastPage @@ -179,6 +214,10 @@ continue } + if staging != aws.BoolValue(output.Distribution.DistributionConfig.Staging) { + continue + } + r := ResourceDistribution() d := r.Data(nil) d.SetId(id) @@ -208,6 +247,40 @@ return nil } +func sweepContinuousDeploymentPolicies(region string) error { + ctx := sweep.Context(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.CloudFrontConn(ctx) + input := &cloudfront.ListContinuousDeploymentPoliciesInput{} + + // ListContinuousDeploymentPolicies does not have a paginator + for { + output, err := conn.ListContinuousDeploymentPoliciesWithContext(ctx, input) + if err != nil { + log.Printf("[WARN] %s", err) + break + } + + // Stop on an empty page; the marker is unchanged, so continuing would loop forever. + if output == nil || output.ContinuousDeploymentPolicyList == nil || len(output.ContinuousDeploymentPolicyList.Items) == 0 { + break + } + + for _, cdp := range output.ContinuousDeploymentPolicyList.Items { + if err := DeleteCDP(ctx, conn, aws.StringValue(cdp.ContinuousDeploymentPolicy.Id)); err != nil { + log.Printf("[WARN] %s", err) + } + } + + if output.ContinuousDeploymentPolicyList.NextMarker == nil { + break + } + input.Marker = output.ContinuousDeploymentPolicyList.NextMarker + } + + return nil +} + func sweepFunctions(region string) error { ctx := sweep.Context(region) client, err := sweep.SharedRegionalSweepClient(ctx, region) diff --git a/internal/service/quicksight/sweep.go b/internal/service/quicksight/sweep.go index eee8a96b0cb..64136161580 100644 --- a/internal/service/quicksight/sweep.go +++ b/internal/service/quicksight/sweep.go @@ -459,6 +459,9 @@ func skipSweepError(err error) bool { if tfawserr.ErrMessageContains(err, quicksight.ErrCodeResourceNotFoundException, "Directory information for account") { return true } + if tfawserr.ErrMessageContains(err, quicksight.ErrCodeResourceNotFoundException, "Account information for account") { + return true + } return sweep.SkipSweepError(err) } diff --git a/internal/service/s3/bucket.go b/internal/service/s3/bucket.go index a0f0dc5e045..0e8018af7c4 100644 --- a/internal/service/s3/bucket.go +++ b/internal/service/s3/bucket.go @@ -78,7 +78,7 @@ func ResourceBucket() *schema.Resource { Optional: true, Computed: true, ConflictsWith: []string{"grant"}, - ValidateFunc: validation.StringInSlice(BucketCannedACL_Values(), false), + ValidateFunc: validation.StringInSlice(bucketCannedACL_Values(), false), Deprecated: "Use the aws_s3_bucket_acl resource instead", }, "arn": { diff --git a/internal/service/s3/bucket_accelerate_configuration.go b/internal/service/s3/bucket_accelerate_configuration.go index dc5230d1c98..0a9d32ccdaa 100644 --- 
a/internal/service/s3/bucket_accelerate_configuration.go +++ b/internal/service/s3/bucket_accelerate_configuration.go @@ -163,6 +163,7 @@ func resourceBucketAccelerateConfigurationDelete(ctx context.Context, d *schema. input.ExpectedBucketOwner = aws.String(expectedBucketOwner) } + log.Printf("[DEBUG] Deleting S3 Bucket Accelerate Configuration: %s", d.Id()) _, err = conn.PutBucketAccelerateConfiguration(ctx, input) if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket) { diff --git a/internal/service/s3/bucket_acl.go b/internal/service/s3/bucket_acl.go index 550ee07f8bf..5b4d7a5121a 100644 --- a/internal/service/s3/bucket_acl.go +++ b/internal/service/s3/bucket_acl.go @@ -20,6 +20,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/enum" + tfslices "github.com/hashicorp/terraform-provider-aws/internal/slices" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) @@ -112,10 +113,10 @@ func ResourceBucketACL() *schema.Resource { }, }, "acl": { - Type: schema.TypeString, - Optional: true, - ConflictsWith: []string{"access_control_policy"}, - ValidateDiagFunc: enum.Validate[types.BucketCannedACL](), + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"access_control_policy"}, + ValidateFunc: validation.StringInSlice(bucketCannedACL_Values(), false), }, "bucket": { Type: schema.TypeString, @@ -523,3 +524,13 @@ func findBucketACL(ctx context.Context, conn *s3.Client, bucket, expectedBucketO return output, nil } + +// These should be defined in the AWS SDK for Go. There is an issue, https://github.com/aws/aws-sdk-go/issues/2683. 
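+// Until then, bucketCannedACL_Values below merges them with the SDK v2 enum values so
+// validation accepts every canned ACL that S3 supports.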
+const ( + bucketCannedACLExecRead = "aws-exec-read" + bucketCannedACLLogDeliveryWrite = "log-delivery-write" +) + +func bucketCannedACL_Values() []string { + return tfslices.AppendUnique(enum.Values[types.BucketCannedACL](), bucketCannedACLExecRead, bucketCannedACLLogDeliveryWrite) +} diff --git a/internal/service/s3/bucket_data_source.go b/internal/service/s3/bucket_data_source.go index d00f565012b..776e851c94e 100644 --- a/internal/service/s3/bucket_data_source.go +++ b/internal/service/s3/bucket_data_source.go @@ -8,15 +8,14 @@ import ( "fmt" "log" - "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go-v2/feature/s3/manager" + "github.com/aws/aws-sdk-go-v2/service/s3" "github.com/aws/aws-sdk-go/aws/arn" - "github.com/aws/aws-sdk-go/aws/request" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/aws/aws-sdk-go/service/s3/s3manager" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) // @SDKDataSource("aws_s3_bucket") @@ -25,14 +24,14 @@ func DataSourceBucket() *schema.Resource { ReadWithoutTimeout: dataSourceBucketRead, Schema: map[string]*schema.Schema{ - "bucket": { - Type: schema.TypeString, - Required: true, - }, "arn": { Type: schema.TypeString, Computed: true, }, + "bucket": { + Type: schema.TypeString, + Required: true, + }, "bucket_domain_name": { Type: schema.TypeString, Computed: true, @@ -49,11 +48,11 @@ func DataSourceBucket() *schema.Resource { Type: schema.TypeString, Computed: true, }, - "website_endpoint": { + "website_domain": { Type: schema.TypeString, Computed: true, }, - "website_domain": { + "website_endpoint": { Type: schema.TypeString, Computed: true, }, @@ -63,86 +62,62 @@ func DataSourceBucket() *schema.Resource { func dataSourceBucketRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + awsClient := meta.(*conns.AWSClient) + conn := awsClient.S3Client(ctx) bucket := d.Get("bucket").(string) + err := findBucket(ctx, conn, bucket) - input := &s3.HeadBucketInput{ - Bucket: aws.String(bucket), + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket (%s): %s", bucket, err) } - log.Printf("[DEBUG] Reading S3 bucket: %s", input) - _, err := conn.HeadBucketWithContext(ctx, input) + region, err := manager.GetBucketRegion(ctx, conn, bucket, + func(o *s3.Options) { + // By default, GetBucketRegion forces virtual host addressing, which + // is not compatible with many non-AWS implementations. Instead, pass + // the provider s3_force_path_style configuration, which defaults to + // false, but allows override. + o.UsePathStyle = awsClient.S3UsePathStyle() + }, + func(o *s3.Options) { + // By default, GetBucketRegion uses anonymous credentials when doing + // a HEAD request to get the bucket region. This breaks in aws-cn regions + // when the account doesn't have an ICP license to host public content. + // Use the current credentials when getting the bucket region. 
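+ // (CredentialsProvider is the accessor added to AWSClient in internal/conns/awsclient.go;
+ // it returns nil if the underlying aws.Config is not initialized.)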
+ o.Credentials = awsClient.CredentialsProvider() + }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Failed getting S3 bucket (%s): %s", bucket, err) + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket (%s) Region: %s", bucket, err) } d.SetId(bucket) arn := arn.ARN{ - Partition: meta.(*conns.AWSClient).Partition, + Partition: awsClient.Partition, Service: "s3", Resource: bucket, }.String() d.Set("arn", arn) - d.Set("bucket_domain_name", meta.(*conns.AWSClient).PartitionHostname(fmt.Sprintf("%s.s3", bucket))) - - err = bucketLocation(ctx, meta.(*conns.AWSClient), d, bucket) - if err != nil { - return sdkdiag.AppendErrorf(diags, "getting S3 Bucket location: %s", err) - } - - regionalDomainName, err := BucketRegionalDomainName(bucket, d.Get("region").(string)) - if err != nil { - return sdkdiag.AppendErrorf(diags, "getting S3 Bucket regional domain name: %s", err) - } - d.Set("bucket_regional_domain_name", regionalDomainName) - - return diags -} - -func bucketLocation(ctx context.Context, client *conns.AWSClient, d *schema.ResourceData, bucket string) error { - region, err := s3manager.GetBucketRegionWithClient(ctx, client.S3Conn(ctx), bucket, func(r *request.Request) { - // By default, GetBucketRegion forces virtual host addressing, which - // is not compatible with many non-AWS implementations. Instead, pass - // the provider s3_force_path_style configuration, which defaults to - // false, but allows override. - r.Config.S3ForcePathStyle = client.S3Conn(ctx).Config.S3ForcePathStyle - - // By default, GetBucketRegion uses anonymous credentials when doing - // a HEAD request to get the bucket region. This breaks in aws-cn regions - // when the account doesn't have an ICP license to host public content. - // Use the current credentials when getting the bucket region. 
- r.Config.Credentials = client.S3Conn(ctx).Config.Credentials - }) - if err != nil { - return err - } - if err := d.Set("region", region); err != nil { - return err - } - - hostedZoneID, err := HostedZoneIDForRegion(region) - if err != nil { - log.Printf("[WARN] %s", err) + d.Set("bucket_domain_name", awsClient.PartitionHostname(fmt.Sprintf("%s.s3", bucket))) + if regionalDomainName, err := BucketRegionalDomainName(bucket, region); err == nil { + d.Set("bucket_regional_domain_name", regionalDomainName) } else { + log.Printf("[WARN] BucketRegionalDomainName: %s", err) + } + if hostedZoneID, err := HostedZoneIDForRegion(region); err == nil { d.Set("hosted_zone_id", hostedZoneID) + } else { + log.Printf("[WARN] HostedZoneIDForRegion: %s", err) } - - _, websiteErr := client.S3Conn(ctx).GetBucketWebsite( - &s3.GetBucketWebsiteInput{ - Bucket: aws.String(bucket), - }, - ) - - if websiteErr == nil { - websiteEndpoint := WebsiteEndpoint(client, bucket, region) - if err := d.Set("website_endpoint", websiteEndpoint.Endpoint); err != nil { - return err - } - if err := d.Set("website_domain", websiteEndpoint.Domain); err != nil { - return err - } + d.Set("region", region) + if _, err := findBucketWebsite(ctx, conn, bucket, ""); err == nil { + website := WebsiteEndpoint(awsClient, bucket, region) + d.Set("website_domain", website.Domain) + d.Set("website_endpoint", website.Endpoint) + } else if !tfresource.NotFound(err) { + log.Printf("[WARN] Reading S3 Bucket (%s) Website: %s", bucket, err) } - return nil + + return diags } diff --git a/internal/service/s3/bucket_data_source_test.go b/internal/service/s3/bucket_data_source_test.go index a7440321d48..5f9e29175a6 100644 --- a/internal/service/s3/bucket_data_source_test.go +++ b/internal/service/s3/bucket_data_source_test.go @@ -7,11 +7,11 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/service/s3" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketDataSource_basic(t *testing.T) { @@ -22,7 +22,7 @@ func TestAccS3BucketDataSource_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { @@ -47,7 +47,7 @@ func TestAccS3BucketDataSource_website(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { diff --git a/internal/service/s3/bucket_intelligent_tiering_configuration.go b/internal/service/s3/bucket_intelligent_tiering_configuration.go index 02349daa6ae..c31cd93f99e 100644 --- a/internal/service/s3/bucket_intelligent_tiering_configuration.go +++ b/internal/service/s3/bucket_intelligent_tiering_configuration.go @@ -9,14 +9,15 @@ import ( "log" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + 
"github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" @@ -29,6 +30,7 @@ func ResourceBucketIntelligentTieringConfiguration() *schema.Resource { ReadWithoutTimeout: resourceBucketIntelligentTieringConfigurationRead, UpdateWithoutTimeout: resourceBucketIntelligentTieringConfigurationPut, DeleteWithoutTimeout: resourceBucketIntelligentTieringConfigurationDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, @@ -65,10 +67,10 @@ func ResourceBucketIntelligentTieringConfiguration() *schema.Resource { ForceNew: true, }, "status": { - Type: schema.TypeString, - Optional: true, - Default: s3.IntelligentTieringStatusEnabled, - ValidateFunc: validation.StringInSlice(s3.IntelligentTieringStatus_Values(), false), + Type: schema.TypeString, + Optional: true, + Default: types.IntelligentTieringStatusEnabled, + ValidateDiagFunc: enum.Validate[types.IntelligentTieringStatus](), }, "tiering": { Type: schema.TypeSet, @@ -77,9 +79,9 @@ func ResourceBucketIntelligentTieringConfiguration() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "access_tier": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(s3.IntelligentTieringAccessTier_Values(), false), + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.IntelligentTieringAccessTier](), }, "days": { Type: schema.TypeInt, @@ -94,67 +96,74 @@ func ResourceBucketIntelligentTieringConfiguration() *schema.Resource { func resourceBucketIntelligentTieringConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) - bucketName := d.Get("bucket").(string) - configurationName := d.Get("name").(string) - resourceID := BucketIntelligentTieringConfigurationCreateResourceID(bucketName, configurationName) - apiObject := &s3.IntelligentTieringConfiguration{ - Id: aws.String(configurationName), - Status: aws.String(d.Get("status").(string)), + name := d.Get("name").(string) + intelligentTieringConfiguration := &types.IntelligentTieringConfiguration{ + Id: aws.String(name), + Status: types.IntelligentTieringStatus(d.Get("status").(string)), } if v, ok := d.GetOk("filter"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - apiObject.Filter = expandIntelligentTieringFilter(ctx, v.([]interface{})[0].(map[string]interface{})) + intelligentTieringConfiguration.Filter = expandIntelligentTieringFilter(ctx, v.([]interface{})[0].(map[string]interface{})) } if v, ok := d.GetOk("tiering"); ok && v.(*schema.Set).Len() > 0 { - apiObject.Tierings = expandTierings(v.(*schema.Set).List()) + intelligentTieringConfiguration.Tierings = expandTierings(v.(*schema.Set).List()) } + bucket := d.Get("bucket").(string) input := 
&s3.PutBucketIntelligentTieringConfigurationInput{ - Bucket: aws.String(bucketName), - Id: aws.String(configurationName), - IntelligentTieringConfiguration: apiObject, + Bucket: aws.String(bucket), + Id: aws.String(name), + IntelligentTieringConfiguration: intelligentTieringConfiguration, } - log.Printf("[DEBUG] Creating S3 Intelligent-Tiering Configuration: %s", input) - _, err := retryWhenBucketNotFound(ctx, func() (interface{}, error) { - return conn.PutBucketIntelligentTieringConfigurationWithContext(ctx, input) - }) + _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return conn.PutBucketIntelligentTieringConfiguration(ctx, input) + }, errCodeNoSuchBucket) if err != nil { - return sdkdiag.AppendErrorf(diags, "creating S3 Intelligent-Tiering Configuration (%s): %s", resourceID, err) + return sdkdiag.AppendErrorf(diags, "creating S3 Bucket (%s) Intelligent-Tiering Configuration (%s): %s", bucket, name, err) } - d.SetId(resourceID) + if d.IsNewResource() { + d.SetId(BucketIntelligentTieringConfigurationCreateResourceID(bucket, name)) + + _, err = tfresource.RetryWhenNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findIntelligentTieringConfiguration(ctx, conn, bucket, name) + }) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for S3 Bucket Intelligent-Tiering Configuration (%s) create: %s", d.Id(), err) + } + } return append(diags, resourceBucketIntelligentTieringConfigurationRead(ctx, d, meta)...) } func resourceBucketIntelligentTieringConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) - - bucketName, configurationName, err := BucketIntelligentTieringConfigurationParseResourceID(d.Id()) + conn := meta.(*conns.AWSClient).S3Client(ctx) + bucket, name, err := BucketIntelligentTieringConfigurationParseResourceID(d.Id()) if err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Intelligent-Tiering Configuration (%s): %s", d.Id(), err) + return sdkdiag.AppendFromErr(diags, err) } - output, err := FindBucketIntelligentTieringConfiguration(ctx, conn, bucketName, configurationName) + output, err := findIntelligentTieringConfiguration(ctx, conn, bucket, name) if !d.IsNewResource() && tfresource.NotFound(err) { - log.Printf("[WARN] S3 Intelligent-Tiering Configuration (%s) not found, removing from state", d.Id()) + log.Printf("[WARN] S3 Bucket Intelligent-Tiering Configuration (%s) not found, removing from state", d.Id()) d.SetId("") return diags } if err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Intelligent-Tiering Configuration (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Intelligent-Tiering Configuration (%s): %s", d.Id(), err) } - d.Set("bucket", bucketName) + d.Set("bucket", bucket) if output.Filter != nil { if err := d.Set("filter", []interface{}{flattenIntelligentTieringFilter(ctx, output.Filter)}); err != nil { return sdkdiag.AppendErrorf(diags, "setting filter: %s", err) @@ -173,26 +182,33 @@ func resourceBucketIntelligentTieringConfigurationRead(ctx context.Context, d *s func resourceBucketIntelligentTieringConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) - - bucketName, configurationName, err := BucketIntelligentTieringConfigurationParseResourceID(d.Id()) + conn := 
meta.(*conns.AWSClient).S3Client(ctx) + bucket, name, err := BucketIntelligentTieringConfigurationParseResourceID(d.Id()) if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting S3 Intelligent-Tiering Configuration (%s): %s", d.Id(), err) + return sdkdiag.AppendFromErr(diags, err) } - log.Printf("[DEBUG] Deleting S3 Intelligent-Tiering Configuration: (%s)", d.Id()) - _, err = conn.DeleteBucketIntelligentTieringConfigurationWithContext(ctx, &s3.DeleteBucketIntelligentTieringConfigurationInput{ - Bucket: aws.String(bucketName), - Id: aws.String(configurationName), + log.Printf("[DEBUG] Deleting S3 Bucket Intelligent-Tiering Configuration: %s", d.Id()) + _, err = conn.DeleteBucketIntelligentTieringConfiguration(ctx, &s3.DeleteBucketIntelligentTieringConfigurationInput{ + Bucket: aws.String(bucket), + Id: aws.String(name), }) - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket, errCodeNoSuchConfiguration) { + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket, errCodeNoSuchConfiguration) { return diags } if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting S3 Intelligent-Tiering Configuration (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Intelligent-Tiering Configuration (%s): %s", d.Id(), err) + } + + _, err = tfresource.RetryUntilNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findIntelligentTieringConfiguration(ctx, conn, bucket, name) + }) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for S3 Bucket Intelligent-Tiering Configuration (%s) delete: %s", d.Id(), err) } return diags @@ -217,15 +233,15 @@ func BucketIntelligentTieringConfigurationParseResourceID(id string) (string, st return "", "", fmt.Errorf("unexpected format for ID (%[1]s), expected bucket-name%[2]sconfiguration-name", id, bucketIntelligentTieringConfigurationResourceIDSeparator) } -func FindBucketIntelligentTieringConfiguration(ctx context.Context, conn *s3.S3, bucketName, configurationName string) (*s3.IntelligentTieringConfiguration, error) { +func findIntelligentTieringConfiguration(ctx context.Context, conn *s3.Client, bucket, name string) (*types.IntelligentTieringConfiguration, error) { input := &s3.GetBucketIntelligentTieringConfigurationInput{ - Bucket: aws.String(bucketName), - Id: aws.String(configurationName), + Bucket: aws.String(bucket), + Id: aws.String(name), } - output, err := conn.GetBucketIntelligentTieringConfigurationWithContext(ctx, input) + output, err := conn.GetBucketIntelligentTieringConfiguration(ctx, input) - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket, errCodeNoSuchConfiguration) { + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket, errCodeNoSuchConfiguration) { return nil, &retry.NotFoundError{ LastError: err, LastRequest: input, @@ -243,7 +259,7 @@ func FindBucketIntelligentTieringConfiguration(ctx context.Context, conn *s3.S3, return output.IntelligentTieringConfiguration, nil } -func expandIntelligentTieringFilter(ctx context.Context, tfMap map[string]interface{}) *s3.IntelligentTieringFilter { +func expandIntelligentTieringFilter(ctx context.Context, tfMap map[string]interface{}) *types.IntelligentTieringFilter { if tfMap == nil { return nil } @@ -254,22 +270,22 @@ func expandIntelligentTieringFilter(ctx context.Context, tfMap map[string]interf prefix = v } - var tags []*s3.Tag + var tags []types.Tag if v, ok := tfMap["tags"].(map[string]interface{}); ok { - tags = Tags(tftags.New(ctx, v)) + tags = tagsV2(tftags.New(ctx, v)) } - apiObject := &s3.IntelligentTieringFilter{} + 
apiObject := &types.IntelligentTieringFilter{} if prefix == "" { switch len(tags) { case 0: return nil case 1: - apiObject.Tag = tags[0] + apiObject.Tag = &tags[0] default: - apiObject.And = &s3.IntelligentTieringAndOperator{ + apiObject.And = &types.IntelligentTieringAndOperator{ Tags: tags, } } @@ -278,7 +294,7 @@ func expandIntelligentTieringFilter(ctx context.Context, tfMap map[string]interf case 0: apiObject.Prefix = aws.String(prefix) default: - apiObject.And = &s3.IntelligentTieringAndOperator{ + apiObject.And = &types.IntelligentTieringAndOperator{ Prefix: aws.String(prefix), Tags: tags, } @@ -288,30 +304,30 @@ func expandIntelligentTieringFilter(ctx context.Context, tfMap map[string]interf return apiObject } -func expandTiering(tfMap map[string]interface{}) *s3.Tiering { +func expandTiering(tfMap map[string]interface{}) *types.Tiering { if tfMap == nil { return nil } - apiObject := &s3.Tiering{} + apiObject := &types.Tiering{} if v, ok := tfMap["access_tier"].(string); ok && v != "" { - apiObject.AccessTier = aws.String(v) + apiObject.AccessTier = types.IntelligentTieringAccessTier(v) } if v, ok := tfMap["days"].(int); ok && v != 0 { - apiObject.Days = aws.Int64(int64(v)) + apiObject.Days = int32(v) } return apiObject } -func expandTierings(tfList []interface{}) []*s3.Tiering { +func expandTierings(tfList []interface{}) []types.Tiering { if len(tfList) == 0 { return nil } - var apiObjects []*s3.Tiering + var apiObjects []types.Tiering for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]interface{}) @@ -326,13 +342,13 @@ func expandTierings(tfList []interface{}) []*s3.Tiering { continue } - apiObjects = append(apiObjects, apiObject) + apiObjects = append(apiObjects, *apiObject) } return apiObjects } -func flattenIntelligentTieringFilter(ctx context.Context, apiObject *s3.IntelligentTieringFilter) map[string]interface{} { +func flattenIntelligentTieringFilter(ctx context.Context, apiObject *types.IntelligentTieringFilter) map[string]interface{} { if apiObject == nil { return nil } @@ -341,46 +357,37 @@ func flattenIntelligentTieringFilter(ctx context.Context, apiObject *s3.Intellig if apiObject.And == nil { if v := apiObject.Prefix; v != nil { - tfMap["prefix"] = aws.StringValue(v) + tfMap["prefix"] = aws.ToString(v) } if v := apiObject.Tag; v != nil { - tfMap["tags"] = KeyValueTags(ctx, []*s3.Tag{v}).Map() + tfMap["tags"] = keyValueTagsV2(ctx, []types.Tag{*v}).Map() } } else { apiObject := apiObject.And if v := apiObject.Prefix; v != nil { - tfMap["prefix"] = aws.StringValue(v) + tfMap["prefix"] = aws.ToString(v) } if v := apiObject.Tags; v != nil { - tfMap["tags"] = KeyValueTags(ctx, v).Map() + tfMap["tags"] = keyValueTagsV2(ctx, v).Map() } } return tfMap } -func flattenTiering(apiObject *s3.Tiering) map[string]interface{} { - if apiObject == nil { - return nil - } - - tfMap := map[string]interface{}{} - - if v := apiObject.AccessTier; v != nil { - tfMap["access_tier"] = aws.StringValue(v) - } - - if v := apiObject.Days; v != nil { - tfMap["days"] = aws.Int64Value(v) +func flattenTiering(apiObject types.Tiering) map[string]interface{} { + tfMap := map[string]interface{}{ + "access_tier": apiObject.AccessTier, + "days": apiObject.Days, } return tfMap } -func flattenTierings(apiObjects []*s3.Tiering) []interface{} { +func flattenTierings(apiObjects []types.Tiering) []interface{} { if len(apiObjects) == 0 { return nil } @@ -388,10 +395,6 @@ func flattenTierings(apiObjects []*s3.Tiering) []interface{} { var tfList []interface{} for _, apiObject := range apiObjects { - 
if apiObject == nil { - continue - } - tfList = append(tfList, flattenTiering(apiObject)) } diff --git a/internal/service/s3/bucket_intelligent_tiering_configuration_test.go b/internal/service/s3/bucket_intelligent_tiering_configuration_test.go index e3453a64fb1..9cb32d606e4 100644 --- a/internal/service/s3/bucket_intelligent_tiering_configuration_test.go +++ b/internal/service/s3/bucket_intelligent_tiering_configuration_test.go @@ -8,7 +8,7 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -16,18 +16,19 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketIntelligentTieringConfiguration_basic(t *testing.T) { ctx := acctest.Context(t) - var itc s3.IntelligentTieringConfiguration + var itc types.IntelligentTieringConfiguration rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_intelligent_tiering_configuration.test" bucketResourceName := "aws_s3_bucket.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketIntelligentTieringConfigurationDestroy(ctx), Steps: []resource.TestStep{ @@ -55,13 +56,13 @@ func TestAccS3BucketIntelligentTieringConfiguration_basic(t *testing.T) { func TestAccS3BucketIntelligentTieringConfiguration_disappears(t *testing.T) { ctx := acctest.Context(t) - var itc s3.IntelligentTieringConfiguration + var itc types.IntelligentTieringConfiguration rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_intelligent_tiering_configuration.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketIntelligentTieringConfigurationDestroy(ctx), Steps: []resource.TestStep{ @@ -79,14 +80,14 @@ func TestAccS3BucketIntelligentTieringConfiguration_disappears(t *testing.T) { func TestAccS3BucketIntelligentTieringConfiguration_Filter(t *testing.T) { ctx := acctest.Context(t) - var itc s3.IntelligentTieringConfiguration + var itc types.IntelligentTieringConfiguration rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_intelligent_tiering_configuration.test" bucketResourceName := "aws_s3_bucket.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketIntelligentTieringConfigurationDestroy(ctx), Steps: []resource.TestStep{ @@ -194,6 +195,63 @@ func TestAccS3BucketIntelligentTieringConfiguration_Filter(t *testing.T) { }) } +func 
testAccCheckBucketIntelligentTieringConfigurationExists(ctx context.Context, n string, v *types.IntelligentTieringConfiguration) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + bucket, name, err := tfs3.BucketIntelligentTieringConfigurationParseResourceID(rs.Primary.ID) + if err != nil { + return err + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) + + output, err := tfs3.FindIntelligentTieringConfiguration(ctx, conn, bucket, name) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccCheckBucketIntelligentTieringConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_s3_bucket_intelligent_tiering_configuration" { + continue + } + + bucket, name, err := tfs3.BucketIntelligentTieringConfigurationParseResourceID(rs.Primary.ID) + if err != nil { + return err + } + + _, err = tfs3.FindIntelligentTieringConfiguration(ctx, conn, bucket, name) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("S3 Intelligent-Tiering Configuration %s still exists", rs.Primary.ID) + } + + return nil + } +} + func testAccBucketIntelligentTieringConfigurationConfig_basic(rName string) string { return fmt.Sprintf(` resource "aws_s3_bucket_intelligent_tiering_configuration" "test" { @@ -346,66 +404,3 @@ resource "aws_s3_bucket" "test" { } `, rName) } - -func testAccCheckBucketIntelligentTieringConfigurationExists(ctx context.Context, n string, v *s3.IntelligentTieringConfiguration) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("No S3 Intelligent-Tiering Configuration ID is set") - } - - bucketName, configurationName, err := tfs3.BucketIntelligentTieringConfigurationParseResourceID(rs.Primary.ID) - - if err != nil { - return err - } - - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - output, err := tfs3.FindBucketIntelligentTieringConfiguration(ctx, conn, bucketName, configurationName) - - if err != nil { - return err - } - - *v = *output - - return nil - } -} - -func testAccCheckBucketIntelligentTieringConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { - return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_s3_bucket_intelligent_tiering_configuration" { - continue - } - - bucketName, configurationName, err := tfs3.BucketIntelligentTieringConfigurationParseResourceID(rs.Primary.ID) - - if err != nil { - return err - } - - _, err = tfs3.FindBucketIntelligentTieringConfiguration(ctx, conn, bucketName, configurationName) - - if tfresource.NotFound(err) { - continue - } - - if err != nil { - return err - } - - return fmt.Errorf("S3 Intelligent-Tiering Configuration %s still exists", rs.Primary.ID) - } - - return nil - } -} diff --git a/internal/service/s3/bucket_inventory.go b/internal/service/s3/bucket_inventory.go index f65589d9b6b..71dae46a571 100644 --- a/internal/service/s3/bucket_inventory.go +++ b/internal/service/s3/bucket_inventory.go @@ -9,14 +9,16 @@ import ( "log" "strings" - 
"github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" @@ -30,6 +32,7 @@ func ResourceBucketInventory() *schema.Resource { ReadWithoutTimeout: resourceBucketInventoryRead, UpdateWithoutTimeout: resourceBucketInventoryPut, DeleteWithoutTimeout: resourceBucketInventoryDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, @@ -40,30 +43,6 @@ func ResourceBucketInventory() *schema.Resource { Required: true, ForceNew: true, }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringLenBetween(0, 64), - }, - "enabled": { - Type: schema.TypeBool, - Default: true, - Optional: true, - }, - "filter": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "prefix": { - Type: schema.TypeString, - Optional: true, - }, - }, - }, - }, "destination": { Type: schema.TypeList, Required: true, @@ -78,28 +57,15 @@ func ResourceBucketInventory() *schema.Resource { MinItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "format": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice([]string{ - s3.InventoryFormatCsv, - s3.InventoryFormatOrc, - s3.InventoryFormatParquet, - }, false), - }, - "bucket_arn": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidARN, - }, "account_id": { Type: schema.TypeString, Optional: true, ValidateFunc: verify.ValidAccountID, }, - "prefix": { - Type: schema.TypeString, - Optional: true, + "bucket_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, }, "encryption": { Type: schema.TypeList, @@ -135,47 +101,72 @@ func ResourceBucketInventory() *schema.Resource { }, }, }, + "format": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.InventoryFormat](), + }, + "prefix": { + Type: schema.TypeString, + Optional: true, + }, }, }, }, }, }, }, - "schedule": { + "enabled": { + Type: schema.TypeBool, + Default: true, + Optional: true, + }, + "filter": { Type: schema.TypeList, - Required: true, + Optional: true, MaxItems: 1, - MinItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "frequency": { + "prefix": { Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice([]string{ - s3.InventoryFrequencyDaily, - s3.InventoryFrequencyWeekly, - }, false), + Optional: true, }, }, }, }, - // TODO: Is there a sensible default for this? 
"included_object_versions": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice([]string{ - s3.InventoryIncludedObjectVersionsCurrent, - s3.InventoryIncludedObjectVersionsAll, - }, false), + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.InventoryIncludedObjectVersions](), + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 64), }, "optional_fields": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{ - Type: schema.TypeString, - ValidateFunc: validation.StringInSlice(s3.InventoryOptionalField_Values(), false), + Type: schema.TypeString, + ValidateDiagFunc: enum.Validate[types.InventoryOptionalField](), + }, + }, + "schedule": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "frequency": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.InventoryFrequency](), + }, + }, }, - Set: schema.HashString, }, }, } @@ -183,215 +174,161 @@ func ResourceBucketInventory() *schema.Resource { func resourceBucketInventoryPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) - bucket := d.Get("bucket").(string) - name := d.Get("name").(string) + conn := meta.(*conns.AWSClient).S3Client(ctx) - inventoryConfiguration := &s3.InventoryConfiguration{ + name := d.Get("name").(string) + inventoryConfiguration := &types.InventoryConfiguration{ Id: aws.String(name), - IsEnabled: aws.Bool(d.Get("enabled").(bool)), + IsEnabled: d.Get("enabled").(bool), } - if v, ok := d.GetOk("included_object_versions"); ok { - inventoryConfiguration.IncludedObjectVersions = aws.String(v.(string)) + if v, ok := d.GetOk("destination"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + tfMap := v.([]interface{})[0].(map[string]interface{})["bucket"].([]interface{})[0].(map[string]interface{}) + inventoryConfiguration.Destination = &types.InventoryDestination{ + S3BucketDestination: expandInventoryBucketDestination(tfMap), + } } - if v, ok := d.GetOk("optional_fields"); ok { - inventoryConfiguration.OptionalFields = flex.ExpandStringSet(v.(*schema.Set)) + if v, ok := d.GetOk("filter"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + inventoryConfiguration.Filter = expandInventoryFilter(v.([]interface{})[0].(map[string]interface{})) } - if v, ok := d.GetOk("schedule"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - scheduleList := v.([]interface{}) - scheduleMap := scheduleList[0].(map[string]interface{}) - inventoryConfiguration.Schedule = &s3.InventorySchedule{ - Frequency: aws.String(scheduleMap["frequency"].(string)), - } + if v, ok := d.GetOk("included_object_versions"); ok { + inventoryConfiguration.IncludedObjectVersions = types.InventoryIncludedObjectVersions(v.(string)) } - if v, ok := d.GetOk("filter"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - filterList := v.([]interface{}) - filterMap := filterList[0].(map[string]interface{}) - inventoryConfiguration.Filter = expandInventoryFilter(filterMap) + if v, ok := d.GetOk("optional_fields"); ok && v.(*schema.Set).Len() > 0 { + inventoryConfiguration.OptionalFields = flex.ExpandStringyValueSet[types.InventoryOptionalField](v.(*schema.Set)) } - if v, ok := d.GetOk("destination"); ok && len(v.([]interface{})) > 0 && 
v.([]interface{})[0] != nil { - destinationList := v.([]interface{}) - destinationMap := destinationList[0].(map[string]interface{}) - bucketList := destinationMap["bucket"].([]interface{}) - bucketMap := bucketList[0].(map[string]interface{}) - - inventoryConfiguration.Destination = &s3.InventoryDestination{ - S3BucketDestination: expandInventoryBucketDestination(bucketMap), + if v, ok := d.GetOk("schedule"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + tfMap := v.([]interface{})[0].(map[string]interface{}) + inventoryConfiguration.Schedule = &types.InventorySchedule{ + Frequency: types.InventoryFrequency(tfMap["frequency"].(string)), } } + bucket := d.Get("bucket").(string) input := &s3.PutBucketInventoryConfigurationInput{ Bucket: aws.String(bucket), Id: aws.String(name), InventoryConfiguration: inventoryConfiguration, } - log.Printf("[DEBUG] Putting S3 bucket inventory configuration: %s", input) - err := retry.RetryContext(ctx, s3BucketPropagationTimeout, func() *retry.RetryError { - _, err := conn.PutBucketInventoryConfigurationWithContext(ctx, input) + _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return conn.PutBucketInventoryConfiguration(ctx, input) + }, errCodeNoSuchBucket) - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - return retry.RetryableError(err) - } - - if err != nil { - return retry.NonRetryableError(err) - } + if err != nil { + return sdkdiag.AppendErrorf(diags, "creating S3 Bucket (%s) Inventory: %s", bucket, err) + } - return nil - }) + if d.IsNewResource() { + d.SetId(fmt.Sprintf("%s:%s", bucket, name)) - if tfresource.TimedOut(err) { - _, err = conn.PutBucketInventoryConfigurationWithContext(ctx, input) - } + _, err = tfresource.RetryWhenNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findInventoryConfiguration(ctx, conn, bucket, name) + }) - if err != nil { - return sdkdiag.AppendErrorf(diags, "putting S3 Bucket Inventory Configuration: %s", err) + if err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for S3 Bucket Inventory (%s) create: %s", d.Id(), err) + } } - d.SetId(fmt.Sprintf("%s:%s", bucket, name)) - return append(diags, resourceBucketInventoryRead(ctx, d, meta)...) 
} -func resourceBucketInventoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { +func resourceBucketInventoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, name, err := BucketInventoryParseID(d.Id()) if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Inventory Configuration (%s): %s", d.Id(), err) - } - - input := &s3.DeleteBucketInventoryConfigurationInput{ - Bucket: aws.String(bucket), - Id: aws.String(name), + return sdkdiag.AppendFromErr(diags, err) } - log.Printf("[DEBUG] Deleting S3 bucket inventory configuration: %s", input) - _, err = conn.DeleteBucketInventoryConfigurationWithContext(ctx, input) + ic, err := findInventoryConfiguration(ctx, conn, bucket, name) - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - return diags + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] S3 Bucket Inventory (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags } - if tfawserr.ErrCodeEquals(err, errCodeNoSuchConfiguration) { - return diags + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Inventory (%s): %s", d.Id(), err) } - if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Inventory Configuration (%s): %s", d.Id(), err) + d.Set("bucket", bucket) + if v := ic.Destination; v != nil { + tfMap := map[string]interface{}{ + "bucket": flattenInventoryBucketDestination(v.S3BucketDestination), + } + if err := d.Set("destination", []map[string]interface{}{tfMap}); err != nil { + return sdkdiag.AppendErrorf(diags, "setting destination: %s", err) + } + } + d.Set("enabled", ic.IsEnabled) + if err := d.Set("filter", flattenInventoryFilter(ic.Filter)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting filter: %s", err) + } + d.Set("included_object_versions", ic.IncludedObjectVersions) + d.Set("name", name) + d.Set("optional_fields", ic.OptionalFields) + if err := d.Set("schedule", flattenInventorySchedule(ic.Schedule)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting schedule: %s", err) } return diags } -func resourceBucketInventoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { +func resourceBucketInventoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, name, err := BucketInventoryParseID(d.Id()) if err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Inventory Configuration (%s): %s", d.Id(), err) + return sdkdiag.AppendFromErr(diags, err) } - d.Set("bucket", bucket) - d.Set("name", name) - - input := &s3.GetBucketInventoryConfigurationInput{ + input := &s3.DeleteBucketInventoryConfigurationInput{ Bucket: aws.String(bucket), Id: aws.String(name), } - log.Printf("[DEBUG] Reading S3 bucket inventory configuration: %s", input) - var output *s3.GetBucketInventoryConfigurationOutput - err = retry.RetryContext(ctx, s3BucketPropagationTimeout, func() *retry.RetryError { - var err error - output, err = conn.GetBucketInventoryConfigurationWithContext(ctx, input) - - if d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - return retry.RetryableError(err) - } - - if d.IsNewResource() && tfawserr.ErrCodeEquals(err, errCodeNoSuchConfiguration) { - 
return retry.RetryableError(err) - } - - if err != nil { - return retry.NonRetryableError(err) - } + log.Printf("[DEBUG] Deleting S3 Bucket Inventory: %s", d.Id()) + _, err = conn.DeleteBucketInventoryConfiguration(ctx, input) - return nil - }) - - if tfresource.TimedOut(err) { - output, err = conn.GetBucketInventoryConfigurationWithContext(ctx, input) - } - - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - log.Printf("[WARN] S3 Bucket Inventory Configuration (%s) not found, removing from state", d.Id()) - d.SetId("") - return diags - } - - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, errCodeNoSuchConfiguration) { - log.Printf("[WARN] S3 Bucket Inventory Configuration (%s) not found, removing from state", d.Id()) - d.SetId("") + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket, errCodeNoSuchConfiguration) { return diags } if err != nil { - return sdkdiag.AppendErrorf(diags, "getting S3 Bucket Inventory Configuration (%s): %s", d.Id(), err) - } - - if output == nil || output.InventoryConfiguration == nil { - return sdkdiag.AppendErrorf(diags, "getting S3 Bucket Inventory Configuration (%s): empty response", d.Id()) - } - - d.Set("enabled", output.InventoryConfiguration.IsEnabled) - d.Set("included_object_versions", output.InventoryConfiguration.IncludedObjectVersions) - - if err := d.Set("optional_fields", flex.FlattenStringList(output.InventoryConfiguration.OptionalFields)); err != nil { - return sdkdiag.AppendErrorf(diags, "setting optional_fields: %s", err) - } - - if err := d.Set("filter", flattenInventoryFilter(output.InventoryConfiguration.Filter)); err != nil { - return sdkdiag.AppendErrorf(diags, "setting filter: %s", err) - } - - if err := d.Set("schedule", flattenInventorySchedule(output.InventoryConfiguration.Schedule)); err != nil { - return sdkdiag.AppendErrorf(diags, "setting schedule: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Inventory (%s): %s", d.Id(), err) } - if output.InventoryConfiguration.Destination != nil { - destination := map[string]interface{}{ - "bucket": flattenInventoryBucketDestination(output.InventoryConfiguration.Destination.S3BucketDestination), - } + _, err = tfresource.RetryUntilNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findInventoryConfiguration(ctx, conn, bucket, name) + }) - if err := d.Set("destination", []map[string]interface{}{destination}); err != nil { - return sdkdiag.AppendErrorf(diags, "setting destination: %s", err) - } + if err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for S3 Bucket Inventory (%s) delete: %s", d.Id(), err) } return diags } -func expandInventoryFilter(m map[string]interface{}) *s3.InventoryFilter { +func expandInventoryFilter(m map[string]interface{}) *types.InventoryFilter { v, ok := m["prefix"] if !ok { return nil } - return &s3.InventoryFilter{ + return &types.InventoryFilter{ Prefix: aws.String(v.(string)), } } -func flattenInventoryFilter(filter *s3.InventoryFilter) []map[string]interface{} { +func flattenInventoryFilter(filter *types.InventoryFilter) []map[string]interface{} { if filter == nil { return nil } @@ -400,7 +337,7 @@ func flattenInventoryFilter(filter *s3.InventoryFilter) []map[string]interface{} m := make(map[string]interface{}) if filter.Prefix != nil { - m["prefix"] = aws.StringValue(filter.Prefix) + m["prefix"] = aws.ToString(filter.Prefix) } result = append(result, m) @@ -408,20 +345,19 @@ func flattenInventoryFilter(filter *s3.InventoryFilter) []map[string]interface{} return result } 
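Throughout these files the migration follows the same mechanical AWS SDK for Go v2 pattern: `*s3.S3` becomes `*s3.Client`, the `WithContext` call suffix is dropped in favor of a leading `ctx` argument, string enums and `aws.StringValue` dereferences move to typed constants in the `types` package, and pointer slices such as `[]*s3.Tiering` become value slices (`[]types.Tiering`). A minimal, self-contained sketch of those idioms (the bucket and configuration names are placeholders, not part of this change):

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	ctx := context.Background()

	// v2 clients are built from an aws.Config instead of a session.Session.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	client := s3.NewFromConfig(cfg) // *s3.Client replaces *s3.S3

	// Operations take ctx as the first argument; the WithContext suffix is gone.
	out, err := client.GetBucketInventoryConfiguration(ctx, &s3.GetBucketInventoryConfigurationInput{
		Bucket: aws.String("example-bucket"), // placeholder
		Id:     aws.String("example-config"), // placeholder
	})
	if err != nil {
		fmt.Println("error:", err)
		return
	}

	// Enums are typed string constants in the types package, compared
	// directly rather than dereferenced with aws.StringValue.
	ic := out.InventoryConfiguration
	fmt.Println(ic.IncludedObjectVersions == types.InventoryIncludedObjectVersionsAll)
}
```

The provider then wraps such calls in its internal retry helpers (`tfresource.RetryWhenAWSErrCodeEquals`, `RetryWhenNotFound`, `RetryUntilNotFound`) to absorb S3's eventual consistency, as the hunks above show.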
-func flattenInventorySchedule(schedule *s3.InventorySchedule) []map[string]interface{} { +func flattenInventorySchedule(schedule *types.InventorySchedule) []map[string]interface{} { result := make([]map[string]interface{}, 0, 1) - - m := make(map[string]interface{}, 1) - m["frequency"] = aws.StringValue(schedule.Frequency) - + m := map[string]interface{}{ + "frequency": schedule.Frequency, + } result = append(result, m) return result } -func expandInventoryBucketDestination(m map[string]interface{}) *s3.InventoryS3BucketDestination { - destination := &s3.InventoryS3BucketDestination{ - Format: aws.String(m["format"].(string)), +func expandInventoryBucketDestination(m map[string]interface{}) *types.InventoryS3BucketDestination { + destination := &types.InventoryS3BucketDestination{ + Format: types.InventoryFormat(m["format"].(string)), Bucket: aws.String(m["bucket_arn"].(string)), } @@ -436,7 +372,7 @@ func expandInventoryBucketDestination(m map[string]interface{}) *s3.InventoryS3B if v, ok := m["encryption"].([]interface{}); ok && len(v) > 0 { encryptionMap := v[0].(map[string]interface{}) - encryption := &s3.InventoryEncryption{} + encryption := &types.InventoryEncryption{} for k, v := range encryptionMap { data := v.([]interface{}) @@ -448,11 +384,11 @@ func expandInventoryBucketDestination(m map[string]interface{}) *s3.InventoryS3B switch k { case "sse_kms": m := data[0].(map[string]interface{}) - encryption.SSEKMS = &s3.SSEKMS{ + encryption.SSEKMS = &types.SSEKMS{ KeyId: aws.String(m["key_id"].(string)), } case "sse_s3": - encryption.SSES3 = &s3.SSES3{} + encryption.SSES3 = &types.SSES3{} } } @@ -462,19 +398,19 @@ func expandInventoryBucketDestination(m map[string]interface{}) *s3.InventoryS3B return destination } -func flattenInventoryBucketDestination(destination *s3.InventoryS3BucketDestination) []map[string]interface{} { +func flattenInventoryBucketDestination(destination *types.InventoryS3BucketDestination) []map[string]interface{} { result := make([]map[string]interface{}, 0, 1) m := map[string]interface{}{ - "format": aws.StringValue(destination.Format), - "bucket_arn": aws.StringValue(destination.Bucket), + "format": destination.Format, + "bucket_arn": aws.ToString(destination.Bucket), } if destination.AccountId != nil { - m["account_id"] = aws.StringValue(destination.AccountId) + m["account_id"] = aws.ToString(destination.AccountId) } if destination.Prefix != nil { - m["prefix"] = aws.StringValue(destination.Prefix) + m["prefix"] = aws.ToString(destination.Prefix) } if destination.Encryption != nil { @@ -484,7 +420,7 @@ func flattenInventoryBucketDestination(destination *s3.InventoryS3BucketDestinat } else if destination.Encryption.SSEKMS != nil { encryption["sse_kms"] = []map[string]interface{}{ { - "key_id": aws.StringValue(destination.Encryption.SSEKMS.KeyId), + "key_id": aws.ToString(destination.Encryption.SSEKMS.KeyId), }, } } @@ -505,3 +441,29 @@ func BucketInventoryParseID(id string) (string, string, error) { name := idParts[1] return bucket, name, nil } + +func findInventoryConfiguration(ctx context.Context, conn *s3.Client, bucket, id string) (*types.InventoryConfiguration, error) { + input := &s3.GetBucketInventoryConfigurationInput{ + Bucket: aws.String(bucket), + Id: aws.String(id), + } + + output, err := conn.GetBucketInventoryConfiguration(ctx, input) + + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket, errCodeNoSuchConfiguration) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + 
+ if output == nil || output.InventoryConfiguration == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.InventoryConfiguration, nil +} diff --git a/internal/service/s3/bucket_inventory_test.go b/internal/service/s3/bucket_inventory_test.go index be0145c733a..e83e0bbc2ce 100644 --- a/internal/service/s3/bucket_inventory_test.go +++ b/internal/service/s3/bucket_inventory_test.go @@ -6,43 +6,38 @@ package s3_test import ( "context" "fmt" - "log" "testing" - "time" "github.com/YakDriver/regexache" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/aws/aws-sdk-go-v2/service/s3/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketInventory_basic(t *testing.T) { ctx := acctest.Context(t) - var conf s3.InventoryConfiguration - rString := sdkacctest.RandString(8) + var conf types.InventoryConfiguration + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_inventory.test" - - bucketName := fmt.Sprintf("tf-acc-bucket-inventory-%s", rString) inventoryName := t.Name() resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketInventoryDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketInventoryConfig_basic(bucketName, inventoryName), + Config: testAccBucketInventoryConfig_basic(rName, inventoryName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketInventoryExistsConfig(ctx, resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "bucket", bucketName), + testAccCheckBucketInventoryExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "bucket", rName), resource.TestCheckResourceAttr(resourceName, "filter.#", "1"), resource.TestCheckResourceAttr(resourceName, "filter.0.prefix", "documents/"), resource.TestCheckResourceAttr(resourceName, "name", inventoryName), @@ -56,7 +51,7 @@ func TestAccS3BucketInventory_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "destination.#", "1"), resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.#", "1"), - acctest.CheckResourceAttrGlobalARNNoAccount(resourceName, "destination.0.bucket.0.bucket_arn", "s3", bucketName), + acctest.CheckResourceAttrGlobalARNNoAccount(resourceName, "destination.0.bucket.0.bucket_arn", "s3", rName), acctest.CheckResourceAttrAccountID(resourceName, "destination.0.bucket.0.account_id"), resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.format", "ORC"), resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.prefix", "inventory"), @@ -73,23 +68,21 @@ func TestAccS3BucketInventory_basic(t *testing.T) { func TestAccS3BucketInventory_encryptWithSSES3(t *testing.T) { ctx := 
acctest.Context(t) - var conf s3.InventoryConfiguration - rString := sdkacctest.RandString(8) + var conf types.InventoryConfiguration + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_inventory.test" - - bucketName := fmt.Sprintf("tf-acc-bucket-inventory-%s", rString) inventoryName := t.Name() resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketInventoryDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketInventoryConfig_encryptSSE(bucketName, inventoryName), + Config: testAccBucketInventoryConfig_encryptSSE(rName, inventoryName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketInventoryExistsConfig(ctx, resourceName, &conf), + testAccCheckBucketInventoryExists(ctx, resourceName, &conf), resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.encryption.0.sse_s3.#", "1"), ), }, @@ -104,23 +97,21 @@ func TestAccS3BucketInventory_encryptWithSSES3(t *testing.T) { func TestAccS3BucketInventory_encryptWithSSEKMS(t *testing.T) { ctx := acctest.Context(t) - var conf s3.InventoryConfiguration - rString := sdkacctest.RandString(8) + var conf types.InventoryConfiguration + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_inventory.test" - - bucketName := fmt.Sprintf("tf-acc-bucket-inventory-%s", rString) inventoryName := t.Name() resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketInventoryDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketInventoryConfig_encryptSSEKMS(bucketName, inventoryName), + Config: testAccBucketInventoryConfig_encryptSSEKMS(rName, inventoryName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketInventoryExistsConfig(ctx, resourceName, &conf), + testAccCheckBucketInventoryExists(ctx, resourceName, &conf), resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.encryption.0.sse_kms.#", "1"), resource.TestMatchResourceAttr(resourceName, "destination.0.bucket.0.encryption.0.sse_kms.0.key_id", regexache.MustCompile(fmt.Sprintf("^arn:%s:kms:", acctest.Partition()))), ), @@ -134,34 +125,27 @@ func TestAccS3BucketInventory_encryptWithSSEKMS(t *testing.T) { }) } -func testAccCheckBucketInventoryExistsConfig(ctx context.Context, n string, res *s3.InventoryConfiguration) resource.TestCheckFunc { +func testAccCheckBucketInventoryExists(ctx context.Context, n string, v *types.InventoryConfiguration) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { - return fmt.Errorf("No S3 bucket inventory configuration ID is set") - } - - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, name, err := tfs3.BucketInventoryParseID(rs.Primary.ID) if err != nil { return err } - input := &s3.GetBucketInventoryConfigurationInput{ - Bucket: aws.String(bucket), - Id: aws.String(name), - } - log.Printf("[DEBUG] Reading S3 bucket inventory configuration: %s", input) - output, err := 
conn.GetBucketInventoryConfigurationWithContext(ctx, input) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) + + output, err := tfs3.FindInventoryConfiguration(ctx, conn, bucket, name) + if err != nil { return err } - *res = *output.InventoryConfiguration + *v = *output return nil } @@ -169,7 +153,7 @@ func testAccCheckBucketInventoryExistsConfig(ctx context.Context, n string, res func testAccCheckBucketInventoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_inventory" { @@ -181,42 +165,33 @@ func testAccCheckBucketInventoryDestroy(ctx context.Context) resource.TestCheckF return err } - err = retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - input := &s3.GetBucketInventoryConfigurationInput{ - Bucket: aws.String(bucket), - Id: aws.String(name), - } - log.Printf("[DEBUG] Reading S3 bucket inventory configuration: %s", input) - output, err := conn.GetBucketInventoryConfigurationWithContext(ctx, input) - if err != nil { - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) || tfawserr.ErrMessageContains(err, "NoSuchConfiguration", "The specified configuration does not exist.") { - return nil - } - return retry.NonRetryableError(err) - } - if output.InventoryConfiguration != nil { - return retry.RetryableError(fmt.Errorf("S3 bucket inventory configuration exists: %v", output)) - } - return nil - }) + _, err = tfs3.FindInventoryConfiguration(ctx, conn, bucket, name) + + if tfresource.NotFound(err) { + continue + } + if err != nil { return err } + + return fmt.Errorf("S3 Bucket Inventory %s still exists", rs.Primary.ID) } + return nil } } -func testAccBucketInventoryBucketConfig(name string) string { +func testAccBucketInventoryConfig_base(rName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -`, name) +`, rName) } func testAccBucketInventoryConfig_basic(bucketName, inventoryName string) string { - return testAccBucketInventoryBucketConfig(bucketName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketInventoryConfig_base(bucketName), fmt.Sprintf(` data "aws_caller_identity" "current" {} resource "aws_s3_bucket_inventory" "test" { @@ -247,11 +222,11 @@ resource "aws_s3_bucket_inventory" "test" { } } } -`, inventoryName) +`, inventoryName)) } func testAccBucketInventoryConfig_encryptSSE(bucketName, inventoryName string) string { - return testAccBucketInventoryBucketConfig(bucketName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketInventoryConfig_base(bucketName), fmt.Sprintf(` resource "aws_s3_bucket_inventory" "test" { bucket = aws_s3_bucket.test.id name = %[1]q @@ -273,13 +248,13 @@ resource "aws_s3_bucket_inventory" "test" { } } } -`, inventoryName) +`, inventoryName)) } func testAccBucketInventoryConfig_encryptSSEKMS(bucketName, inventoryName string) string { - return testAccBucketInventoryBucketConfig(bucketName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketInventoryConfig_base(bucketName), fmt.Sprintf(` resource "aws_kms_key" "test" { - description = "Terraform acc test S3 inventory SSE-KMS encryption: %[1]s" + description = %[1]q deletion_window_in_days = 7 } @@ -306,5 +281,5 @@ resource "aws_s3_bucket_inventory" "test" { } } } -`, bucketName, inventoryName) +`, bucketName, inventoryName)) } diff --git 
a/internal/service/s3/bucket_logging.go b/internal/service/s3/bucket_logging.go index 8e6532cd473..09c88b69cc2 100644 --- a/internal/service/s3/bucket_logging.go +++ b/internal/service/s3/bucket_logging.go @@ -6,16 +6,17 @@ package s3 import ( "context" "log" - "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -74,9 +75,9 @@ func ResourceBucketLogging() *schema.Resource { Optional: true, }, "type": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(s3.Type_Values(), false), + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.Type](), }, "uri": { Type: schema.TypeString, @@ -86,9 +87,9 @@ func ResourceBucketLogging() *schema.Resource { }, }, "permission": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(s3.BucketLogsPermission_Values(), false), + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.BucketLogsPermission](), }, }, }, @@ -103,40 +104,39 @@ func ResourceBucketLogging() *schema.Resource { func resourceBucketLoggingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) input := &s3.PutBucketLoggingInput{ Bucket: aws.String(bucket), - BucketLoggingStatus: &s3.BucketLoggingStatus{ - LoggingEnabled: &s3.LoggingEnabled{ + BucketLoggingStatus: &types.BucketLoggingStatus{ + LoggingEnabled: &types.LoggingEnabled{ TargetBucket: aws.String(d.Get("target_bucket").(string)), TargetPrefix: aws.String(d.Get("target_prefix").(string)), }, }, } + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } if v, ok := d.GetOk("target_grant"); ok && v.(*schema.Set).Len() > 0 { input.BucketLoggingStatus.LoggingEnabled.TargetGrants = expandBucketLoggingTargetGrants(v.(*schema.Set).List()) } - if expectedBucketOwner != "" { - input.ExpectedBucketOwner = aws.String(expectedBucketOwner) - } - - _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 2*time.Minute, func() (interface{}, error) { - return conn.PutBucketLoggingWithContext(ctx, input) - }, s3.ErrCodeNoSuchBucket) + _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return conn.PutBucketLogging(ctx, input) + }, errCodeNoSuchBucket) if err != nil { - return sdkdiag.AppendErrorf(diags, "putting S3 Bucket (%s) Logging: %s", bucket, err) + return sdkdiag.AppendErrorf(diags, "creating S3 Bucket (%s) Logging: %s", bucket, err) } 
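// The resource ID encodes both parts: `bucket` alone, or
// `bucket,expected_bucket_owner` when an owner is configured
// (see CreateResourceID/ParseResourceID).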
d.SetId(CreateResourceID(bucket, expectedBucketOwner)) _, err = tfresource.RetryWhenNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { - return FindBucketLogging(ctx, conn, bucket, expectedBucketOwner) + return findLoggingEnabled(ctx, conn, bucket, expectedBucketOwner) }) if err != nil { @@ -148,14 +148,14 @@ func resourceBucketLoggingCreate(ctx context.Context, d *schema.ResourceData, me func resourceBucketLoggingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { return sdkdiag.AppendFromErr(diags, err) } - loggingEnabled, err := FindBucketLogging(ctx, conn, bucket, expectedBucketOwner) + loggingEnabled, err := findLoggingEnabled(ctx, conn, bucket, expectedBucketOwner) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] S3 Bucket Logging (%s) not found, removing from state", d.Id()) @@ -180,7 +180,7 @@ func resourceBucketLoggingRead(ctx context.Context, d *schema.ResourceData, meta func resourceBucketLoggingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -189,25 +189,22 @@ func resourceBucketLoggingUpdate(ctx context.Context, d *schema.ResourceData, me input := &s3.PutBucketLoggingInput{ Bucket: aws.String(bucket), - BucketLoggingStatus: &s3.BucketLoggingStatus{ - LoggingEnabled: &s3.LoggingEnabled{ + BucketLoggingStatus: &types.BucketLoggingStatus{ + LoggingEnabled: &types.LoggingEnabled{ TargetBucket: aws.String(d.Get("target_bucket").(string)), TargetPrefix: aws.String(d.Get("target_prefix").(string)), }, }, } + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } if v, ok := d.GetOk("target_grant"); ok && v.(*schema.Set).Len() > 0 { input.BucketLoggingStatus.LoggingEnabled.TargetGrants = expandBucketLoggingTargetGrants(v.(*schema.Set).List()) } - if expectedBucketOwner != "" { - input.ExpectedBucketOwner = aws.String(expectedBucketOwner) - } - - _, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, 2*time.Minute, func() (interface{}, error) { - return conn.PutBucketLoggingWithContext(ctx, input) - }, s3.ErrCodeNoSuchBucket) + _, err = conn.PutBucketLogging(ctx, input) if err != nil { return sdkdiag.AppendErrorf(diags, "updating S3 Bucket Logging (%s): %s", d.Id(), err) @@ -218,7 +215,7 @@ func resourceBucketLoggingUpdate(ctx context.Context, d *schema.ResourceData, me func resourceBucketLoggingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -227,16 +224,15 @@ func resourceBucketLoggingDelete(ctx context.Context, d *schema.ResourceData, me input := &s3.PutBucketLoggingInput{ Bucket: aws.String(bucket), - BucketLoggingStatus: &s3.BucketLoggingStatus{}, + BucketLoggingStatus: &types.BucketLoggingStatus{}, } - if expectedBucketOwner != "" { input.ExpectedBucketOwner = aws.String(expectedBucketOwner) } - _, err = conn.PutBucketLoggingWithContext(ctx, input) + _, err = conn.PutBucketLogging(ctx, input) - if 
tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket) { return nil } @@ -244,10 +240,12 @@ func resourceBucketLoggingDelete(ctx context.Context, d *schema.ResourceData, me return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Logging (%s): %s", d.Id(), err) } + // Don't wait for the logging to disappear as it still exists after update. + return nil } -func FindBucketLogging(ctx context.Context, conn *s3.S3, bucketName, expectedBucketOwner string) (*s3.LoggingEnabled, error) { +func findLoggingEnabled(ctx context.Context, conn *s3.Client, bucketName, expectedBucketOwner string) (*types.LoggingEnabled, error) { input := &s3.GetBucketLoggingInput{ Bucket: aws.String(bucketName), } @@ -255,9 +253,9 @@ func FindBucketLogging(ctx context.Context, conn *s3.S3, bucketName, expectedBuc input.ExpectedBucketOwner = aws.String(expectedBucketOwner) } - output, err := conn.GetBucketLoggingWithContext(ctx, input) + output, err := conn.GetBucketLogging(ctx, input) - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket) { return nil, &retry.NotFoundError{ LastError: err, LastRequest: input, @@ -275,8 +273,8 @@ func FindBucketLogging(ctx context.Context, conn *s3.S3, bucketName, expectedBuc return output.LoggingEnabled, nil } -func expandBucketLoggingTargetGrants(l []interface{}) []*s3.TargetGrant { - var grants []*s3.TargetGrant +func expandBucketLoggingTargetGrants(l []interface{}) []types.TargetGrant { + var grants []types.TargetGrant for _, tfMapRaw := range l { tfMap, ok := tfMapRaw.(map[string]interface{}) @@ -284,14 +282,14 @@ func expandBucketLoggingTargetGrants(l []interface{}) []*s3.TargetGrant { continue } - grant := &s3.TargetGrant{} + grant := types.TargetGrant{} if v, ok := tfMap["grantee"].([]interface{}); ok && len(v) > 0 && v[0] != nil { grant.Grantee = expandBucketLoggingTargetGrantGrantee(v) } if v, ok := tfMap["permission"].(string); ok && v != "" { - grant.Permission = aws.String(v) + grant.Permission = types.BucketLogsPermission(v) } grants = append(grants, grant) @@ -300,7 +298,7 @@ func expandBucketLoggingTargetGrants(l []interface{}) []*s3.TargetGrant { return grants } -func expandBucketLoggingTargetGrantGrantee(l []interface{}) *s3.Grantee { +func expandBucketLoggingTargetGrantGrantee(l []interface{}) *types.Grantee { if len(l) == 0 || l[0] == nil { return nil } @@ -310,7 +308,7 @@ func expandBucketLoggingTargetGrantGrantee(l []interface{}) *s3.Grantee { return nil } - grantee := &s3.Grantee{} + grantee := &types.Grantee{} if v, ok := tfMap["display_name"].(string); ok && v != "" { grantee.DisplayName = aws.String(v) @@ -325,7 +323,7 @@ func expandBucketLoggingTargetGrantGrantee(l []interface{}) *s3.Grantee { } if v, ok := tfMap["type"].(string); ok && v != "" { - grantee.Type = aws.String(v) + grantee.Type = types.Type(v) } if v, ok := tfMap["uri"].(string); ok && v != "" { @@ -335,55 +333,47 @@ func expandBucketLoggingTargetGrantGrantee(l []interface{}) *s3.Grantee { return grantee } -func flattenBucketLoggingTargetGrants(grants []*s3.TargetGrant) []interface{} { +func flattenBucketLoggingTargetGrants(grants []types.TargetGrant) []interface{} { var results []interface{} for _, grant := range grants { - if grant == nil { - continue + m := map[string]interface{}{ + "permission": grant.Permission, } - m := make(map[string]interface{}) - if grant.Grantee != nil { m["grantee"] = flattenBucketLoggingTargetGrantGrantee(grant.Grantee) } - if grant.Permission != nil { - 
m["permission"] = aws.StringValue(grant.Permission) - } - results = append(results, m) } return results } -func flattenBucketLoggingTargetGrantGrantee(g *s3.Grantee) []interface{} { +func flattenBucketLoggingTargetGrantGrantee(g *types.Grantee) []interface{} { if g == nil { return []interface{}{} } - m := make(map[string]interface{}) + m := map[string]interface{}{ + "type": g.Type, + } if g.DisplayName != nil { - m["display_name"] = aws.StringValue(g.DisplayName) + m["display_name"] = aws.ToString(g.DisplayName) } if g.EmailAddress != nil { - m["email_address"] = aws.StringValue(g.EmailAddress) + m["email_address"] = aws.ToString(g.EmailAddress) } if g.ID != nil { - m["id"] = aws.StringValue(g.ID) - } - - if g.Type != nil { - m["type"] = aws.StringValue(g.Type) + m["id"] = aws.ToString(g.ID) } if g.URI != nil { - m["uri"] = aws.StringValue(g.URI) + m["uri"] = aws.ToString(g.URI) } return []interface{}{m} diff --git a/internal/service/s3/bucket_logging_test.go b/internal/service/s3/bucket_logging_test.go index a2cce8bc34d..c255f6dd772 100644 --- a/internal/service/s3/bucket_logging_test.go +++ b/internal/service/s3/bucket_logging_test.go @@ -9,7 +9,7 @@ import ( "os" "testing" - "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -17,6 +17,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketLogging_basic(t *testing.T) { @@ -26,7 +27,7 @@ func TestAccS3BucketLogging_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketLoggingDestroy(ctx), Steps: []resource.TestStep{ @@ -57,7 +58,7 @@ func TestAccS3BucketLogging_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketLoggingDestroy(ctx), Steps: []resource.TestStep{ @@ -80,7 +81,7 @@ func TestAccS3BucketLogging_update(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketLoggingDestroy(ctx), Steps: []resource.TestStep{ @@ -117,19 +118,19 @@ func TestAccS3BucketLogging_TargetGrantByID(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketLoggingDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketLoggingConfig_targetGrantByID(rName, 
s3.BucketLogsPermissionFullControl), + Config: testAccBucketLoggingConfig_targetGrantByID(rName, string(types.BucketLogsPermissionFullControl)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketLoggingExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "target_grant.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "target_grant.*", map[string]string{ "grantee.#": "1", - "grantee.0.type": s3.TypeCanonicalUser, - "permission": s3.BucketLogsPermissionFullControl, + "grantee.0.type": string(types.TypeCanonicalUser), + "permission": string(types.BucketLogsPermissionFullControl), }), resource.TestCheckTypeSetElemAttrPair(resourceName, "target_grant.*.grantee.0.id", "data.aws_canonical_user_id.current", "id"), resource.TestCheckTypeSetElemAttrPair(resourceName, "target_grant.*.grantee.0.display_name", "data.aws_canonical_user_id.current", "display_name"), @@ -141,14 +142,14 @@ func TestAccS3BucketLogging_TargetGrantByID(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBucketLoggingConfig_targetGrantByID(rName, s3.BucketLogsPermissionRead), + Config: testAccBucketLoggingConfig_targetGrantByID(rName, string(types.BucketLogsPermissionRead)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketLoggingExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "target_grant.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "target_grant.*", map[string]string{ "grantee.#": "1", - "grantee.0.type": s3.TypeCanonicalUser, - "permission": s3.BucketLogsPermissionRead, + "grantee.0.type": string(types.TypeCanonicalUser), + "permission": string(types.BucketLogsPermissionRead), }), resource.TestCheckTypeSetElemAttrPair(resourceName, "target_grant.*.grantee.0.display_name", "data.aws_canonical_user_id.current", "display_name"), ), @@ -182,20 +183,20 @@ func TestAccS3BucketLogging_TargetGrantByEmail(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketLoggingDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketLoggingConfig_targetGrantByEmail(rName, rEmail, s3.BucketLogsPermissionFullControl), + Config: testAccBucketLoggingConfig_targetGrantByEmail(rName, rEmail, string(types.BucketLogsPermissionFullControl)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketLoggingExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "target_grant.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "target_grant.*", map[string]string{ "grantee.#": "1", "grantee.0.email_address": rEmail, - "grantee.0.type": s3.TypeAmazonCustomerByEmail, - "permission": s3.BucketLogsPermissionFullControl, + "grantee.0.type": string(types.TypeAmazonCustomerByEmail), + "permission": string(types.BucketLogsPermissionFullControl), }), ), }, @@ -205,15 +206,15 @@ func TestAccS3BucketLogging_TargetGrantByEmail(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBucketLoggingConfig_targetGrantByEmail(rName, rEmail, s3.BucketLogsPermissionRead), + Config: testAccBucketLoggingConfig_targetGrantByEmail(rName, rEmail, string(types.BucketLogsPermissionRead)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketLoggingExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "target_grant.#", "1"), 
resource.TestCheckTypeSetElemNestedAttrs(resourceName, "target_grant.*", map[string]string{ "grantee.#": "1", "grantee.0.email": rEmail, - "grantee.0.type": s3.TypeAmazonCustomerByEmail, - "permission": s3.BucketLogsPermissionRead, + "grantee.0.type": string(types.TypeAmazonCustomerByEmail), + "permission": string(types.BucketLogsPermissionRead), }), ), }, @@ -240,19 +241,19 @@ func TestAccS3BucketLogging_TargetGrantByGroup(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketLoggingDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketLoggingConfig_targetGrantByGroup(rName, s3.BucketLogsPermissionFullControl), + Config: testAccBucketLoggingConfig_targetGrantByGroup(rName, string(types.BucketLogsPermissionFullControl)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketLoggingExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "target_grant.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "target_grant.*", map[string]string{ "grantee.#": "1", - "grantee.0.type": s3.TypeGroup, - "permission": s3.BucketLogsPermissionFullControl, + "grantee.0.type": string(types.TypeGroup), + "permission": string(types.BucketLogsPermissionFullControl), }), testAccCheckBucketLoggingTargetGrantGranteeURI(resourceName), ), @@ -263,14 +264,14 @@ func TestAccS3BucketLogging_TargetGrantByGroup(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBucketLoggingConfig_targetGrantByGroup(rName, s3.BucketLogsPermissionRead), + Config: testAccBucketLoggingConfig_targetGrantByGroup(rName, string(types.BucketLogsPermissionRead)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketLoggingExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "target_grant.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "target_grant.*", map[string]string{ "grantee.#": "1", - "grantee.0.type": s3.TypeGroup, - "permission": s3.BucketLogsPermissionRead, + "grantee.0.type": string(types.TypeGroup), + "permission": string(types.BucketLogsPermissionRead), }), testAccCheckBucketLoggingTargetGrantGranteeURI(resourceName), ), @@ -299,7 +300,7 @@ func TestAccS3BucketLogging_migrate_loggingNoChange(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketDestroy(ctx), Steps: []resource.TestStep{ @@ -332,7 +333,7 @@ func TestAccS3BucketLogging_migrate_loggingWithChange(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketDestroy(ctx), Steps: []resource.TestStep{ @@ -364,7 +365,7 @@ func TestAccS3BucketLogging_withExpectedBucketOwner(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketLoggingDestroy(ctx), Steps: []resource.TestStep{ @@ -390,7 +391,7 @@ func TestAccS3BucketLogging_withExpectedBucketOwner(t *testing.T) { func testAccCheckBucketLoggingDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_logging" { @@ -402,7 +403,7 @@ func testAccCheckBucketLoggingDestroy(ctx context.Context) resource.TestCheckFun return err } - _, err = tfs3.FindBucketLogging(ctx, conn, bucket, expectedBucketOwner) + _, err = tfs3.FindLoggingEnabled(ctx, conn, bucket, expectedBucketOwner) if tfresource.NotFound(err) { continue @@ -431,9 +432,9 @@ func testAccCheckBucketLoggingExists(ctx context.Context, n string) resource.Tes return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) - _, err = tfs3.FindBucketLogging(ctx, conn, bucket, expectedBucketOwner) + _, err = tfs3.FindLoggingEnabled(ctx, conn, bucket, expectedBucketOwner) return err } diff --git a/internal/service/s3/bucket_metric.go b/internal/service/s3/bucket_metric.go index 11640a96c90..4af96a792ff 100644 --- a/internal/service/s3/bucket_metric.go +++ b/internal/service/s3/bucket_metric.go @@ -9,9 +9,10 @@ import ( "log" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -29,6 +30,7 @@ func ResourceBucketMetric() *schema.Resource { ReadWithoutTimeout: resourceBucketMetricRead, UpdateWithoutTimeout: resourceBucketMetricPut, DeleteWithoutTimeout: resourceBucketMetricDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, @@ -71,182 +73,173 @@ func ResourceBucketMetric() *schema.Resource { func resourceBucketMetricPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) - bucket := d.Get("bucket").(string) - name := d.Get("name").(string) + conn := meta.(*conns.AWSClient).S3Client(ctx) - metricsConfiguration := &s3.MetricsConfiguration{ + name := d.Get("name").(string) + metricsConfiguration := &types.MetricsConfiguration{ Id: aws.String(name), } if v, ok := d.GetOk("filter"); ok { - filterList := v.([]interface{}) - if filterMap, ok := filterList[0].(map[string]interface{}); ok { - metricsConfiguration.Filter = ExpandMetricsFilter(ctx, filterMap) + if tfMap, ok := v.([]interface{})[0].(map[string]interface{}); ok { + metricsConfiguration.Filter = expandMetricsFilter(ctx, tfMap) } } + bucket := d.Get("bucket").(string) input := &s3.PutBucketMetricsConfigurationInput{ Bucket: aws.String(bucket), Id: aws.String(name), MetricsConfiguration: metricsConfiguration, } - log.Printf("[DEBUG] Putting S3 Bucket Metrics Configuration: %s", input) - err := retry.RetryContext(ctx, s3BucketPropagationTimeout, func() *retry.RetryError { - _, err := 
conn.PutBucketMetricsConfigurationWithContext(ctx, input) + _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return conn.PutBucketMetricsConfiguration(ctx, input) + }, errCodeNoSuchBucket) - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - return retry.RetryableError(err) - } - - if err != nil { - return retry.NonRetryableError(err) - } + if err != nil { + return sdkdiag.AppendErrorf(diags, "creating S3 Bucket (%s) Metric: %s", bucket, err) + } - return nil - }) + if d.IsNewResource() { + d.SetId(fmt.Sprintf("%s:%s", bucket, name)) - if tfresource.TimedOut(err) { - _, err = conn.PutBucketMetricsConfigurationWithContext(ctx, input) - } + _, err = tfresource.RetryWhenNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findMetricsConfiguration(ctx, conn, bucket, name) + }) - if err != nil { - return sdkdiag.AppendErrorf(diags, "putting S3 Bucket Metrics Configuration: %s", err) + if err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for S3 Bucket Metric (%s) create: %s", d.Id(), err) + } } - d.SetId(fmt.Sprintf("%s:%s", bucket, name)) - return append(diags, resourceBucketMetricRead(ctx, d, meta)...) } -func resourceBucketMetricDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { +func resourceBucketMetricRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, name, err := BucketMetricParseID(d.Id()) if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Metrics Configuration (%s): %s", d.Id(), err) - } - - input := &s3.DeleteBucketMetricsConfigurationInput{ - Bucket: aws.String(bucket), - Id: aws.String(name), + return sdkdiag.AppendFromErr(diags, err) } - log.Printf("[DEBUG] Deleting S3 Bucket Metrics Configuration: %s", input) - _, err = conn.DeleteBucketMetricsConfigurationWithContext(ctx, input) + mc, err := findMetricsConfiguration(ctx, conn, bucket, name) - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - return diags + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] S3 Bucket Metric (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil } - if tfawserr.ErrCodeEquals(err, errCodeNoSuchConfiguration) { - return diags + if err != nil { + return diag.Errorf("reading S3 Bucket Metric (%s): %s", d.Id(), err) } - if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Metrics Configuration (%s): %s", d.Id(), err) + d.Set("bucket", bucket) + if mc.Filter != nil { + if err := d.Set("filter", []interface{}{flattenMetricsFilter(ctx, mc.Filter)}); err != nil { + return sdkdiag.AppendErrorf(diags, "setting filter: %s", err) + } } + d.Set("name", name) return diags } -func resourceBucketMetricRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { +func resourceBucketMetricDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, name, err := BucketMetricParseID(d.Id()) if err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Metrics Configuration (%s): %s", d.Id(), err) + return sdkdiag.AppendFromErr(diags, err) } - d.Set("bucket", bucket) - d.Set("name", name) - - input := &s3.GetBucketMetricsConfigurationInput{ + 
log.Printf("[DEBUG] Deleting S3 Bucket Metric: %s", d.Id()) + _, err = conn.DeleteBucketMetricsConfiguration(ctx, &s3.DeleteBucketMetricsConfigurationInput{ Bucket: aws.String(bucket), Id: aws.String(name), - } - - log.Printf("[DEBUG] Reading S3 Bucket Metrics Configuration: %s", input) - output, err := conn.GetBucketMetricsConfigurationWithContext(ctx, input) - - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - log.Printf("[WARN] S3 Bucket Metrics Configuration (%s) not found, removing from state", d.Id()) - d.SetId("") - return diags - } + }) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, errCodeNoSuchConfiguration) { - log.Printf("[WARN] S3 Bucket Metrics Configuration (%s) not found, removing from state", d.Id()) - d.SetId("") + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket, errCodeNoSuchConfiguration) { return diags } if err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Metrics Configuration (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Metric (%s): %s", d.Id(), err) } - if output == nil || output.MetricsConfiguration == nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Metrics Configuration (%s): empty response", d.Id()) - } + _, err = tfresource.RetryUntilNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findMetricsConfiguration(ctx, conn, bucket, name) + }) - if output.MetricsConfiguration.Filter != nil { - if err := d.Set("filter", []interface{}{FlattenMetricsFilter(ctx, output.MetricsConfiguration.Filter)}); err != nil { - return sdkdiag.AppendErrorf(diags, "setting filter") - } + if err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for S3 Bucket Metric (%s) delete: %s", d.Id(), err) } return diags } -func ExpandMetricsFilter(ctx context.Context, m map[string]interface{}) *s3.MetricsFilter { +func expandMetricsFilter(ctx context.Context, m map[string]interface{}) types.MetricsFilter { var prefix string if v, ok := m["prefix"]; ok { prefix = v.(string) } - var tags []*s3.Tag + var tags []types.Tag if v, ok := m["tags"]; ok { - tags = Tags(tftags.New(ctx, v).IgnoreAWS()) + tags = tagsV2(tftags.New(ctx, v).IgnoreAWS()) } - metricsFilter := &s3.MetricsFilter{} + var metricsFilter types.MetricsFilter + if prefix != "" && len(tags) > 0 { - metricsFilter.And = &s3.MetricsAndOperator{ - Prefix: aws.String(prefix), - Tags: tags, + metricsFilter = &types.MetricsFilterMemberAnd{ + Value: types.MetricsAndOperator{ + Prefix: aws.String(prefix), + Tags: tags, + }, } } else if len(tags) > 1 { - metricsFilter.And = &s3.MetricsAndOperator{ - Tags: tags, + metricsFilter = &types.MetricsFilterMemberAnd{ + Value: types.MetricsAndOperator{ + Tags: tags, + }, } } else if len(tags) == 1 { - metricsFilter.Tag = tags[0] + metricsFilter = &types.MetricsFilterMemberTag{ + Value: tags[0], + } } else { - metricsFilter.Prefix = aws.String(prefix) + metricsFilter = &types.MetricsFilterMemberPrefix{ + Value: prefix, + } } return metricsFilter } -func FlattenMetricsFilter(ctx context.Context, metricsFilter *s3.MetricsFilter) map[string]interface{} { +func flattenMetricsFilter(ctx context.Context, metricsFilter types.MetricsFilter) map[string]interface{} { m := make(map[string]interface{}) - if and := metricsFilter.And; and != nil { - if and.Prefix != nil { - m["prefix"] = aws.StringValue(and.Prefix) + switch v := metricsFilter.(type) { + case *types.MetricsFilterMemberAnd: + if v := v.Value.Prefix; v != nil { + m["prefix"] = aws.ToString(v) } - if and.Tags != nil 
{ - m["tags"] = KeyValueTags(ctx, and.Tags).IgnoreAWS().Map() + if v := v.Value.Tags; v != nil { + m["tags"] = keyValueTagsV2(ctx, v).IgnoreAWS().Map() } - } else if metricsFilter.Prefix != nil { - m["prefix"] = aws.StringValue(metricsFilter.Prefix) - } else if metricsFilter.Tag != nil { - tags := []*s3.Tag{ - metricsFilter.Tag, + case *types.MetricsFilterMemberPrefix: + m["prefix"] = v.Value + case *types.MetricsFilterMemberTag: + tags := []types.Tag{ + v.Value, } - m["tags"] = KeyValueTags(ctx, tags).IgnoreAWS().Map() + m["tags"] = keyValueTagsV2(ctx, tags).IgnoreAWS().Map() + default: + return nil } return m } @@ -260,3 +253,29 @@ func BucketMetricParseID(id string) (string, string, error) { name := idParts[1] return bucket, name, nil } + +func findMetricsConfiguration(ctx context.Context, conn *s3.Client, bucket, id string) (*types.MetricsConfiguration, error) { + input := &s3.GetBucketMetricsConfigurationInput{ + Bucket: aws.String(bucket), + Id: aws.String(id), + } + + output, err := conn.GetBucketMetricsConfiguration(ctx, input) + + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket, errCodeNoSuchConfiguration) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.MetricsConfiguration == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.MetricsConfiguration, nil +} diff --git a/internal/service/s3/bucket_metric_test.go b/internal/service/s3/bucket_metric_test.go index 263077352b1..161d1a0a120 100644 --- a/internal/service/s3/bucket_metric_test.go +++ b/internal/service/s3/bucket_metric_test.go @@ -6,298 +6,38 @@ package s3_test import ( "context" "fmt" - "log" - "reflect" - "sort" "testing" - "time" "github.com/YakDriver/regexache" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/aws/aws-sdk-go-v2/service/s3/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) -func TestExpandMetricsFilter(t *testing.T) { - t.Parallel() - - ctx := context.Background() - testCases := []struct { - Config map[string]interface{} - ExpectedS3MetricsFilter *s3.MetricsFilter - }{ - { - Config: map[string]interface{}{ - "prefix": "prefix/", - }, - ExpectedS3MetricsFilter: &s3.MetricsFilter{ - Prefix: aws.String("prefix/"), - }, - }, - { - Config: map[string]interface{}{ - "prefix": "prefix/", - "tags": map[string]interface{}{ - "tag1key": "tag1value", - }, - }, - ExpectedS3MetricsFilter: &s3.MetricsFilter{ - And: &s3.MetricsAndOperator{ - Prefix: aws.String("prefix/"), - Tags: []*s3.Tag{ - { - Key: aws.String("tag1key"), - Value: aws.String("tag1value"), - }, - }, - }, - }, - }, - { - Config: map[string]interface{}{ - "prefix": "prefix/", - "tags": map[string]interface{}{ - "tag1key": "tag1value", - "tag2key": "tag2value", - }, - }, - ExpectedS3MetricsFilter: &s3.MetricsFilter{ - And: &s3.MetricsAndOperator{ - Prefix: aws.String("prefix/"), - Tags: 
[]*s3.Tag{ - { - Key: aws.String("tag1key"), - Value: aws.String("tag1value"), - }, - { - Key: aws.String("tag2key"), - Value: aws.String("tag2value"), - }, - }, - }, - }, - }, - { - Config: map[string]interface{}{ - "tags": map[string]interface{}{ - "tag1key": "tag1value", - }, - }, - ExpectedS3MetricsFilter: &s3.MetricsFilter{ - Tag: &s3.Tag{ - Key: aws.String("tag1key"), - Value: aws.String("tag1value"), - }, - }, - }, - { - Config: map[string]interface{}{ - "tags": map[string]interface{}{ - "tag1key": "tag1value", - "tag2key": "tag2value", - }, - }, - ExpectedS3MetricsFilter: &s3.MetricsFilter{ - And: &s3.MetricsAndOperator{ - Tags: []*s3.Tag{ - { - Key: aws.String("tag1key"), - Value: aws.String("tag1value"), - }, - { - Key: aws.String("tag2key"), - Value: aws.String("tag2value"), - }, - }, - }, - }, - }, - } - - for i, tc := range testCases { - value := tfs3.ExpandMetricsFilter(ctx, tc.Config) - - // Sort tags by key for consistency - if value.And != nil && value.And.Tags != nil { - sort.Slice(value.And.Tags, func(i, j int) bool { - return *value.And.Tags[i].Key < *value.And.Tags[j].Key - }) - } - - // Convert to strings to avoid dealing with pointers - valueS := fmt.Sprintf("%v", value) - expectedValueS := fmt.Sprintf("%v", tc.ExpectedS3MetricsFilter) - - if valueS != expectedValueS { - t.Fatalf("Case #%d: Given:\n%s\n\nExpected:\n%s", i, valueS, expectedValueS) - } - } -} - -func TestFlattenMetricsFilter(t *testing.T) { - t.Parallel() - - ctx := context.Background() - testCases := []struct { - S3MetricsFilter *s3.MetricsFilter - ExpectedConfig map[string]interface{} - }{ - { - S3MetricsFilter: &s3.MetricsFilter{ - Prefix: aws.String("prefix/"), - }, - ExpectedConfig: map[string]interface{}{ - "prefix": "prefix/", - }, - }, - { - S3MetricsFilter: &s3.MetricsFilter{ - And: &s3.MetricsAndOperator{ - Prefix: aws.String("prefix/"), - Tags: []*s3.Tag{ - { - Key: aws.String("tag1key"), - Value: aws.String("tag1value"), - }, - }, - }, - }, - ExpectedConfig: map[string]interface{}{ - "prefix": "prefix/", - "tags": map[string]string{ - "tag1key": "tag1value", - }, - }, - }, - { - S3MetricsFilter: &s3.MetricsFilter{ - And: &s3.MetricsAndOperator{ - Prefix: aws.String("prefix/"), - Tags: []*s3.Tag{ - { - Key: aws.String("tag1key"), - Value: aws.String("tag1value"), - }, - { - Key: aws.String("tag2key"), - Value: aws.String("tag2value"), - }, - }, - }, - }, - ExpectedConfig: map[string]interface{}{ - "prefix": "prefix/", - "tags": map[string]string{ - "tag1key": "tag1value", - "tag2key": "tag2value", - }, - }, - }, - { - S3MetricsFilter: &s3.MetricsFilter{ - Tag: &s3.Tag{ - Key: aws.String("tag1key"), - Value: aws.String("tag1value"), - }, - }, - ExpectedConfig: map[string]interface{}{ - "tags": map[string]string{ - "tag1key": "tag1value", - }, - }, - }, - { - S3MetricsFilter: &s3.MetricsFilter{ - And: &s3.MetricsAndOperator{ - Tags: []*s3.Tag{ - { - Key: aws.String("tag1key"), - Value: aws.String("tag1value"), - }, - { - Key: aws.String("tag2key"), - Value: aws.String("tag2value"), - }, - }, - }, - }, - ExpectedConfig: map[string]interface{}{ - "tags": map[string]string{ - "tag1key": "tag1value", - "tag2key": "tag2value", - }, - }, - }, - } - - for i, tc := range testCases { - value := tfs3.FlattenMetricsFilter(ctx, tc.S3MetricsFilter) - - if !reflect.DeepEqual(value, tc.ExpectedConfig) { - t.Fatalf("Case #%d: Given:\n%s\n\nExpected:\n%s", i, value, tc.ExpectedConfig) - } - } -} - -func TestBucketMetricParseID(t *testing.T) { - t.Parallel() - - validIds := []string{ - "foo:bar", - 
"my-bucket:entire-bucket", - } - - for _, s := range validIds { - _, _, err := tfs3.BucketMetricParseID(s) - if err != nil { - t.Fatalf("%s should be a valid S3 bucket metrics configuration id: %s", s, err) - } - } - - invalidIds := []string{ - "", - "foo", - "foo:bar:", - "foo:bar:baz", - "foo::bar", - "foo.bar", - } - - for _, s := range invalidIds { - _, _, err := tfs3.BucketMetricParseID(s) - if err == nil { - t.Fatalf("%s should not be a valid S3 bucket metrics configuration id", s) - } - } -} - func TestAccS3BucketMetric_basic(t *testing.T) { ctx := acctest.Context(t) - var conf s3.MetricsConfiguration - rInt := sdkacctest.RandInt() + var conf types.MetricsConfiguration + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_metric.test" - - bucketName := fmt.Sprintf("tf-acc-%d", rInt) metricName := t.Name() resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketMetricDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketMetricConfig_noFilter(bucketName, metricName), + Config: testAccBucketMetricConfig_noFilter(rName, metricName), Check: resource.ComposeTestCheckFunc( testAccCheckBucketMetricsExistsConfig(ctx, resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "bucket", bucketName), + resource.TestCheckResourceAttr(resourceName, "bucket", rName), resource.TestCheckResourceAttr(resourceName, "filter.#", "0"), resource.TestCheckResourceAttr(resourceName, "name", metricName), ), @@ -315,21 +55,19 @@ func TestAccS3BucketMetric_basic(t *testing.T) { // Disallow Empty filter block func TestAccS3BucketMetric_withEmptyFilter(t *testing.T) { ctx := acctest.Context(t) - var conf s3.MetricsConfiguration - rInt := sdkacctest.RandInt() + var conf types.MetricsConfiguration + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_metric.test" - - bucketName := fmt.Sprintf("tf-acc-%d", rInt) metricName := t.Name() resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketMetricDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketMetricConfig_emptyFilter(bucketName, metricName), + Config: testAccBucketMetricConfig_emptyFilter(rName, metricName), Check: resource.ComposeTestCheckFunc( testAccCheckBucketMetricsExistsConfig(ctx, resourceName, &conf), ), @@ -341,10 +79,9 @@ func TestAccS3BucketMetric_withEmptyFilter(t *testing.T) { func TestAccS3BucketMetric_withFilterPrefix(t *testing.T) { ctx := acctest.Context(t) - var conf s3.MetricsConfiguration + var conf types.MetricsConfiguration rInt := sdkacctest.RandInt() resourceName := "aws_s3_bucket_metric.test" - bucketName := fmt.Sprintf("tf-acc-%d", rInt) metricName := t.Name() prefix := fmt.Sprintf("prefix-%d/", rInt) @@ -352,7 +89,7 @@ func TestAccS3BucketMetric_withFilterPrefix(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: 
testAccCheckBucketMetricDestroy(ctx), Steps: []resource.TestStep{ @@ -385,10 +122,9 @@ func TestAccS3BucketMetric_withFilterPrefix(t *testing.T) { func TestAccS3BucketMetric_withFilterPrefixAndMultipleTags(t *testing.T) { ctx := acctest.Context(t) - var conf s3.MetricsConfiguration + var conf types.MetricsConfiguration rInt := sdkacctest.RandInt() resourceName := "aws_s3_bucket_metric.test" - bucketName := fmt.Sprintf("tf-acc-%d", rInt) metricName := t.Name() prefix := fmt.Sprintf("prefix-%d/", rInt) @@ -400,7 +136,7 @@ func TestAccS3BucketMetric_withFilterPrefixAndMultipleTags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketMetricDestroy(ctx), Steps: []resource.TestStep{ @@ -437,10 +173,9 @@ func TestAccS3BucketMetric_withFilterPrefixAndMultipleTags(t *testing.T) { func TestAccS3BucketMetric_withFilterPrefixAndSingleTag(t *testing.T) { ctx := acctest.Context(t) - var conf s3.MetricsConfiguration + var conf types.MetricsConfiguration rInt := sdkacctest.RandInt() resourceName := "aws_s3_bucket_metric.test" - bucketName := fmt.Sprintf("tf-acc-%d", rInt) metricName := t.Name() prefix := fmt.Sprintf("prefix-%d/", rInt) @@ -450,7 +185,7 @@ func TestAccS3BucketMetric_withFilterPrefixAndSingleTag(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketMetricDestroy(ctx), Steps: []resource.TestStep{ @@ -485,10 +220,9 @@ func TestAccS3BucketMetric_withFilterPrefixAndSingleTag(t *testing.T) { func TestAccS3BucketMetric_withFilterMultipleTags(t *testing.T) { ctx := acctest.Context(t) - var conf s3.MetricsConfiguration + var conf types.MetricsConfiguration rInt := sdkacctest.RandInt() resourceName := "aws_s3_bucket_metric.test" - bucketName := fmt.Sprintf("tf-acc-%d", rInt) metricName := t.Name() tag1 := fmt.Sprintf("tag1-%d", rInt) @@ -498,7 +232,7 @@ func TestAccS3BucketMetric_withFilterMultipleTags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketMetricDestroy(ctx), Steps: []resource.TestStep{ @@ -535,10 +269,9 @@ func TestAccS3BucketMetric_withFilterMultipleTags(t *testing.T) { func TestAccS3BucketMetric_withFilterSingleTag(t *testing.T) { ctx := acctest.Context(t) - var conf s3.MetricsConfiguration + var conf types.MetricsConfiguration rInt := sdkacctest.RandInt() resourceName := "aws_s3_bucket_metric.test" - bucketName := fmt.Sprintf("tf-acc-%d", rInt) metricName := t.Name() tag1 := fmt.Sprintf("tag-%d", rInt) @@ -546,7 +279,7 @@ func TestAccS3BucketMetric_withFilterSingleTag(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketMetricDestroy(ctx), Steps: []resource.TestStep{ @@ -581,7 
+314,7 @@ func TestAccS3BucketMetric_withFilterSingleTag(t *testing.T) { func testAccCheckBucketMetricDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_metric" { @@ -593,68 +326,50 @@ func testAccCheckBucketMetricDestroy(ctx context.Context) resource.TestCheckFunc return err } - err = retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - input := &s3.GetBucketMetricsConfigurationInput{ - Bucket: aws.String(bucket), - Id: aws.String(name), - } - log.Printf("[DEBUG] Reading S3 bucket metrics configuration: %s", input) - output, err := conn.GetBucketMetricsConfigurationWithContext(ctx, input) - if err != nil { - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) || tfawserr.ErrMessageContains(err, "NoSuchConfiguration", "The specified configuration does not exist.") { - return nil - } - return retry.NonRetryableError(err) - } - if output.MetricsConfiguration != nil { - return retry.RetryableError(fmt.Errorf("S3 bucket metrics configuration exists: %v", output)) - } - - return nil - }) + _, err = tfs3.FindMetricsConfiguration(ctx, conn, bucket, name) + + if tfresource.NotFound(err) { + continue + } if err != nil { return err } + + return fmt.Errorf("S3 Bucket Metric %s still exists", rs.Primary.ID) } + return nil } } -func testAccCheckBucketMetricsExistsConfig(ctx context.Context, n string, res *s3.MetricsConfiguration) resource.TestCheckFunc { +func testAccCheckBucketMetricsExistsConfig(ctx context.Context, n string, v *types.MetricsConfiguration) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { - return fmt.Errorf("No S3 bucket metrics configuration ID is set") - } - - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, name, err := tfs3.BucketMetricParseID(rs.Primary.ID) if err != nil { return err } - input := &s3.GetBucketMetricsConfigurationInput{ - Bucket: aws.String(bucket), - Id: aws.String(name), - } - log.Printf("[DEBUG] Reading S3 bucket metrics configuration: %s", input) - output, err := conn.GetBucketMetricsConfigurationWithContext(ctx, input) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) + + output, err := tfs3.FindMetricsConfiguration(ctx, conn, bucket, name) + if err != nil { return err } - *res = *output.MetricsConfiguration + *v = *output return nil } } -func testAccBucketMetricsBucketConfig(bucketName string) string { +func testAccBucketMetricConfig_base(bucketName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "bucket" { bucket = %[1]q @@ -663,9 +378,7 @@ resource "aws_s3_bucket" "bucket" { } func testAccBucketMetricConfig_emptyFilter(bucketName, metricName string) string { - return acctest.ConfigCompose( - testAccBucketMetricsBucketConfig(bucketName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketMetricConfig_base(bucketName), fmt.Sprintf(` resource "aws_s3_bucket_metric" "test" { bucket = aws_s3_bucket.bucket.id name = %[1]q @@ -676,9 +389,7 @@ resource "aws_s3_bucket_metric" "test" { } func testAccBucketMetricConfig_filterPrefix(bucketName, metricName, prefix string) string { - return acctest.ConfigCompose( - testAccBucketMetricsBucketConfig(bucketName), - fmt.Sprintf(` + return 
acctest.ConfigCompose(testAccBucketMetricConfig_base(bucketName), fmt.Sprintf(` resource "aws_s3_bucket_metric" "test" { bucket = aws_s3_bucket.bucket.id name = %[1]q @@ -691,9 +402,7 @@ resource "aws_s3_bucket_metric" "test" { } func testAccBucketMetricConfig_filterPrefixAndMultipleTags(bucketName, metricName, prefix, tag1, tag2 string) string { - return acctest.ConfigCompose( - testAccBucketMetricsBucketConfig(bucketName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketMetricConfig_base(bucketName), fmt.Sprintf(` resource "aws_s3_bucket_metric" "test" { bucket = aws_s3_bucket.bucket.id name = %[1]q @@ -702,8 +411,8 @@ resource "aws_s3_bucket_metric" "test" { prefix = %[2]q tags = { - "tag1" = "%s" - "tag2" = "%s" + "tag1" = %[3]q + "tag2" = %[4]q } } } @@ -711,9 +420,7 @@ resource "aws_s3_bucket_metric" "test" { } func testAccBucketMetricConfig_filterPrefixAndSingleTag(bucketName, metricName, prefix, tag string) string { - return acctest.ConfigCompose( - testAccBucketMetricsBucketConfig(bucketName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketMetricConfig_base(bucketName), fmt.Sprintf(` resource "aws_s3_bucket_metric" "test" { bucket = aws_s3_bucket.bucket.id name = %[1]q @@ -730,9 +437,7 @@ resource "aws_s3_bucket_metric" "test" { } func testAccBucketMetricConfig_filterMultipleTags(bucketName, metricName, tag1, tag2 string) string { - return acctest.ConfigCompose( - testAccBucketMetricsBucketConfig(bucketName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketMetricConfig_base(bucketName), fmt.Sprintf(` resource "aws_s3_bucket_metric" "test" { bucket = aws_s3_bucket.bucket.id name = %[1]q @@ -748,9 +453,7 @@ resource "aws_s3_bucket_metric" "test" { } func testAccBucketMetricConfig_filterSingleTag(bucketName, metricName, tag string) string { - return acctest.ConfigCompose( - testAccBucketMetricsBucketConfig(bucketName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketMetricConfig_base(bucketName), fmt.Sprintf(` resource "aws_s3_bucket_metric" "test" { bucket = aws_s3_bucket.bucket.id name = %[1]q @@ -765,9 +468,7 @@ resource "aws_s3_bucket_metric" "test" { } func testAccBucketMetricConfig_noFilter(bucketName, metricName string) string { - return acctest.ConfigCompose( - testAccBucketMetricsBucketConfig(bucketName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccBucketMetricConfig_base(bucketName), fmt.Sprintf(` resource "aws_s3_bucket_metric" "test" { bucket = aws_s3_bucket.bucket.id name = %[1]q diff --git a/internal/service/s3/bucket_notification.go b/internal/service/s3/bucket_notification.go index 10c0ad3fcd4..e3caf0e5859 100644 --- a/internal/service/s3/bucket_notification.go +++ b/internal/service/s3/bucket_notification.go @@ -9,9 +9,10 @@ import ( "log" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" @@ -29,6 +30,7 @@ func ResourceBucketNotification() *schema.Resource { ReadWithoutTimeout: resourceBucketNotificationRead, UpdateWithoutTimeout: resourceBucketNotificationPut, DeleteWithoutTimeout: resourceBucketNotificationDelete, + Importer: &schema.ResourceImporter{ 
StateContext: schema.ImportStatePassthroughContext, }, @@ -39,22 +41,20 @@ func ResourceBucketNotification() *schema.Resource { Required: true, ForceNew: true, }, - "eventbridge": { Type: schema.TypeBool, Optional: true, Default: false, }, - - "topic": { + "lambda_function": { Type: schema.TypeList, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "id": { - Type: schema.TypeString, - Optional: true, - Computed: true, + "events": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, "filter_prefix": { Type: schema.TypeString, @@ -64,29 +64,27 @@ func ResourceBucketNotification() *schema.Resource { Type: schema.TypeString, Optional: true, }, - "topic_arn": { + "id": { Type: schema.TypeString, - Required: true, + Optional: true, + Computed: true, }, - "events": { - Type: schema.TypeSet, - Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, + "lambda_function_arn": { + Type: schema.TypeString, + Optional: true, }, }, }, }, - "queue": { Type: schema.TypeList, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "id": { - Type: schema.TypeString, - Optional: true, - Computed: true, + "events": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, "filter_prefix": { Type: schema.TypeString, @@ -96,29 +94,27 @@ func ResourceBucketNotification() *schema.Resource { Type: schema.TypeString, Optional: true, }, - "queue_arn": { + "id": { Type: schema.TypeString, - Required: true, + Optional: true, + Computed: true, }, - "events": { - Type: schema.TypeSet, + "queue_arn": { + Type: schema.TypeString, Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, }, }, }, }, - - "lambda_function": { + "topic": { Type: schema.TypeList, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "id": { - Type: schema.TypeString, - Optional: true, - Computed: true, + "events": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, "filter_prefix": { Type: schema.TypeString, @@ -128,15 +124,14 @@ func ResourceBucketNotification() *schema.Resource { Type: schema.TypeString, Optional: true, }, - "lambda_function_arn": { + "id": { Type: schema.TypeString, Optional: true, + Computed: true, }, - "events": { - Type: schema.TypeSet, + "topic_arn": { + Type: schema.TypeString, Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, }, }, }, @@ -147,115 +142,95 @@ func ResourceBucketNotification() *schema.Resource { func resourceBucketNotificationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket := d.Get("bucket").(string) - // EventBridge - eventbridgeNotifications := d.Get("eventbridge").(bool) - var eventbridgeConfig *s3.EventBridgeConfiguration - if eventbridgeNotifications { - eventbridgeConfig = &s3.EventBridgeConfiguration{} + var eventbridgeConfig *types.EventBridgeConfiguration + if d.Get("eventbridge").(bool) { + eventbridgeConfig = &types.EventBridgeConfiguration{} } - // TopicNotifications - topicNotifications := d.Get("topic").([]interface{}) - topicConfigs := make([]*s3.TopicConfiguration, 0, len(topicNotifications)) - for i, c := range topicNotifications { - tc := &s3.TopicConfiguration{} + lambdaFunctionNotifications := 
d.Get("lambda_function").([]interface{}) + lambdaConfigs := make([]types.LambdaFunctionConfiguration, 0, len(lambdaFunctionNotifications)) + for i, c := range lambdaFunctionNotifications { + lc := types.LambdaFunctionConfiguration{} c := c.(map[string]interface{}) - // Id if val, ok := c["id"].(string); ok && val != "" { - tc.Id = aws.String(val) + lc.Id = aws.String(val) } else { - tc.Id = aws.String(id.PrefixedUniqueId("tf-s3-topic-")) + lc.Id = aws.String(id.PrefixedUniqueId("tf-s3-lambda-")) } - // TopicArn - if val, ok := c["topic_arn"].(string); ok { - tc.TopicArn = aws.String(val) + if val, ok := c["lambda_function_arn"].(string); ok { + lc.LambdaFunctionArn = aws.String(val) } - // Events - events := d.Get(fmt.Sprintf("topic.%d.events", i)).(*schema.Set).List() - tc.Events = make([]*string, 0, len(events)) - for _, e := range events { - tc.Events = append(tc.Events, aws.String(e.(string))) - } + lc.Events = flex.ExpandStringyValueSet[types.Event](d.Get(fmt.Sprintf("lambda_function.%d.events", i)).(*schema.Set)) - // Filter - filterRules := make([]*s3.FilterRule, 0, filterRulesSliceStartLen) + filterRules := make([]types.FilterRule, 0, filterRulesSliceStartLen) if val, ok := c["filter_prefix"].(string); ok && val != "" { - filterRule := &s3.FilterRule{ - Name: aws.String("prefix"), + filterRule := types.FilterRule{ + Name: types.FilterRuleNamePrefix, Value: aws.String(val), } filterRules = append(filterRules, filterRule) } if val, ok := c["filter_suffix"].(string); ok && val != "" { - filterRule := &s3.FilterRule{ - Name: aws.String("suffix"), + filterRule := types.FilterRule{ + Name: types.FilterRuleNameSuffix, Value: aws.String(val), } filterRules = append(filterRules, filterRule) } if len(filterRules) > 0 { - tc.Filter = &s3.NotificationConfigurationFilter{ - Key: &s3.KeyFilter{ + lc.Filter = &types.NotificationConfigurationFilter{ + Key: &types.S3KeyFilter{ FilterRules: filterRules, }, } } - topicConfigs = append(topicConfigs, tc) + lambdaConfigs = append(lambdaConfigs, lc) } - // SQS queueNotifications := d.Get("queue").([]interface{}) - queueConfigs := make([]*s3.QueueConfiguration, 0, len(queueNotifications)) + queueConfigs := make([]types.QueueConfiguration, 0, len(queueNotifications)) for i, c := range queueNotifications { - qc := &s3.QueueConfiguration{} + qc := types.QueueConfiguration{} c := c.(map[string]interface{}) - // Id if val, ok := c["id"].(string); ok && val != "" { qc.Id = aws.String(val) } else { qc.Id = aws.String(id.PrefixedUniqueId("tf-s3-queue-")) } - // QueueArn if val, ok := c["queue_arn"].(string); ok { qc.QueueArn = aws.String(val) } - // Events - events := d.Get(fmt.Sprintf("queue.%d.events", i)).(*schema.Set).List() - qc.Events = make([]*string, 0, len(events)) - for _, e := range events { - qc.Events = append(qc.Events, aws.String(e.(string))) - } + qc.Events = flex.ExpandStringyValueSet[types.Event](d.Get(fmt.Sprintf("queue.%d.events", i)).(*schema.Set)) - // Filter - filterRules := make([]*s3.FilterRule, 0, filterRulesSliceStartLen) + filterRules := make([]types.FilterRule, 0, filterRulesSliceStartLen) if val, ok := c["filter_prefix"].(string); ok && val != "" { - filterRule := &s3.FilterRule{ - Name: aws.String("prefix"), + filterRule := types.FilterRule{ + Name: types.FilterRuleNamePrefix, Value: aws.String(val), } filterRules = append(filterRules, filterRule) } if val, ok := c["filter_suffix"].(string); ok && val != "" { - filterRule := &s3.FilterRule{ - Name: aws.String("suffix"), + filterRule := types.FilterRule{ + Name: 
types.FilterRuleNameSuffix, Value: aws.String(val), } filterRules = append(filterRules, filterRule) } if len(filterRules) > 0 { - qc.Filter = &s3.NotificationConfigurationFilter{ - Key: &s3.KeyFilter{ + qc.Filter = &types.NotificationConfigurationFilter{ + Key: &types.S3KeyFilter{ FilterRules: filterRules, }, } @@ -263,60 +238,51 @@ func resourceBucketNotificationPut(ctx context.Context, d *schema.ResourceData, queueConfigs = append(queueConfigs, qc) } - // Lambda - lambdaFunctionNotifications := d.Get("lambda_function").([]interface{}) - lambdaConfigs := make([]*s3.LambdaFunctionConfiguration, 0, len(lambdaFunctionNotifications)) - for i, c := range lambdaFunctionNotifications { - lc := &s3.LambdaFunctionConfiguration{} + topicNotifications := d.Get("topic").([]interface{}) + topicConfigs := make([]types.TopicConfiguration, 0, len(topicNotifications)) + for i, c := range topicNotifications { + tc := types.TopicConfiguration{} c := c.(map[string]interface{}) - // Id if val, ok := c["id"].(string); ok && val != "" { - lc.Id = aws.String(val) + tc.Id = aws.String(val) } else { - lc.Id = aws.String(id.PrefixedUniqueId("tf-s3-lambda-")) + tc.Id = aws.String(id.PrefixedUniqueId("tf-s3-topic-")) } - // LambdaFunctionArn - if val, ok := c["lambda_function_arn"].(string); ok { - lc.LambdaFunctionArn = aws.String(val) + if val, ok := c["topic_arn"].(string); ok { + tc.TopicArn = aws.String(val) } - // Events - events := d.Get(fmt.Sprintf("lambda_function.%d.events", i)).(*schema.Set).List() - lc.Events = make([]*string, 0, len(events)) - for _, e := range events { - lc.Events = append(lc.Events, aws.String(e.(string))) - } + tc.Events = flex.ExpandStringyValueSet[types.Event](d.Get(fmt.Sprintf("topic.%d.events", i)).(*schema.Set)) - // Filter - filterRules := make([]*s3.FilterRule, 0, filterRulesSliceStartLen) + filterRules := make([]types.FilterRule, 0, filterRulesSliceStartLen) if val, ok := c["filter_prefix"].(string); ok && val != "" { - filterRule := &s3.FilterRule{ - Name: aws.String("prefix"), + filterRule := types.FilterRule{ + Name: types.FilterRuleNamePrefix, Value: aws.String(val), } filterRules = append(filterRules, filterRule) } if val, ok := c["filter_suffix"].(string); ok && val != "" { - filterRule := &s3.FilterRule{ - Name: aws.String("suffix"), + filterRule := types.FilterRule{ + Name: types.FilterRuleNameSuffix, Value: aws.String(val), } filterRules = append(filterRules, filterRule) } if len(filterRules) > 0 { - lc.Filter = &s3.NotificationConfigurationFilter{ - Key: &s3.KeyFilter{ + tc.Filter = &types.NotificationConfigurationFilter{ + Key: &types.S3KeyFilter{ FilterRules: filterRules, }, } } - lambdaConfigs = append(lambdaConfigs, lc) + topicConfigs = append(topicConfigs, tc) } - notificationConfiguration := &s3.NotificationConfiguration{} + notificationConfiguration := &types.NotificationConfiguration{} if eventbridgeConfig != nil { notificationConfiguration.EventBridgeConfiguration = eventbridgeConfig } @@ -329,123 +295,108 @@ func resourceBucketNotificationPut(ctx context.Context, d *schema.ResourceData, if len(topicConfigs) > 0 { notificationConfiguration.TopicConfigurations = topicConfigs } - i := &s3.PutBucketNotificationConfigurationInput{ + input := &s3.PutBucketNotificationConfigurationInput{ Bucket: aws.String(bucket), NotificationConfiguration: notificationConfiguration, } - log.Printf("[DEBUG] S3 bucket: %s, Putting notification: %v", bucket, i) - err := retry.RetryContext(ctx, s3BucketPropagationTimeout, func() *retry.RetryError { - _, err := 
conn.PutBucketNotificationConfigurationWithContext(ctx, i) - - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - return retry.RetryableError(err) - } - - if err != nil { - return retry.NonRetryableError(err) - } - - return nil - }) - - if tfresource.TimedOut(err) { - _, err = conn.PutBucketNotificationConfigurationWithContext(ctx, i) - } + _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return conn.PutBucketNotificationConfiguration(ctx, input) + }, errCodeNoSuchBucket) if err != nil { - return sdkdiag.AppendErrorf(diags, "putting S3 Bucket Notification Configuration: %s", err) + return diag.Errorf("creating S3 Bucket (%s) Notification: %s", bucket, err) } - d.SetId(bucket) + if d.IsNewResource() { + d.SetId(bucket) - return append(diags, resourceBucketNotificationRead(ctx, d, meta)...) -} + _, err = tfresource.RetryWhenNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findBucketNotificationConfiguration(ctx, conn, d.Id(), "") + }) -func resourceBucketNotificationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) - - i := &s3.PutBucketNotificationConfigurationInput{ - Bucket: aws.String(d.Id()), - NotificationConfiguration: &s3.NotificationConfiguration{}, - } - - log.Printf("[DEBUG] S3 bucket: %s, Deleting notification: %v", d.Id(), i) - _, err := conn.PutBucketNotificationConfigurationWithContext(ctx, i) - - if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Notification Configuration (%s): %s", d.Id(), err) + if err != nil { + return diag.Errorf("waiting for S3 Bucket Notification (%s) create: %s", d.Id(), err) + } } - return diags + return append(diags, resourceBucketNotificationRead(ctx, d, meta)...) 
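	// Create follows the same propagation-safe pattern as the logging and
	// metric resources above: retry the Put while S3 still reports
	// NoSuchBucket, set the ID, then poll findBucketNotificationConfiguration
	// until the new configuration is readable.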
} func resourceBucketNotificationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) - notificationConfigs, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ - Bucket: aws.String(d.Id()), - }) + output, err := findBucketNotificationConfiguration(ctx, conn, d.Id(), "") - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - log.Printf("[WARN] S3 Bucket Notification Configuration (%s) not found, removing from state", d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] S3 Bucket Notification (%s) not found, removing from state", d.Id()) d.SetId("") return diags } if err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Notification Configuration (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Notification (%s): %s", d.Id(), err) } - if notificationConfigs == nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Notification Configuration (%s): empty response", d.Id()) + d.Set("bucket", d.Id()) + d.Set("eventbridge", output.EventBridgeConfiguration != nil) + if err := d.Set("lambda_function", flattenLambdaFunctionConfigurations(output.LambdaFunctionConfigurations)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting lambda_function: %s", err) + } + if err := d.Set("queue", flattenQueueConfigurations(output.QueueConfigurations)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting queue: %s", err) + } + if err := d.Set("topic", flattenTopicConfigurations(output.TopicConfigurations)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting topic: %s", err) } - log.Printf("[DEBUG] S3 Bucket: %s, get notification: %v", d.Id(), notificationConfigs) - - d.Set("bucket", d.Id()) + return diags +} - // EventBridge Notification - d.Set("eventbridge", notificationConfigs.EventBridgeConfiguration != nil) +func resourceBucketNotificationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).S3Client(ctx) - // Topic Notification - if err := d.Set("topic", flattenTopicConfigurations(notificationConfigs.TopicConfigurations)); err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 bucket \"%s\" topic notification: %s", d.Id(), err) + input := &s3.PutBucketNotificationConfigurationInput{ + Bucket: aws.String(d.Id()), + NotificationConfiguration: &types.NotificationConfiguration{}, } - // SQS Notification - if err := d.Set("queue", flattenQueueConfigurations(notificationConfigs.QueueConfigurations)); err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 bucket \"%s\" queue notification: %s", d.Id(), err) + log.Printf("[DEBUG] Deleting S3 Bucket Notification: %s", d.Id()) + _, err := conn.PutBucketNotificationConfiguration(ctx, input) + + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket) { + return nil } - // Lambda Notification - if err := d.Set("lambda_function", flattenLambdaFunctionConfigurations(notificationConfigs.LambdaFunctionConfigurations)); err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 bucket \"%s\" lambda function notification: %s", d.Id(), err) + if err != nil { + return diag.Errorf("deleting S3 Bucket Notification (%s): %s", d.Id(), err) } + // Don't wait for the notification configuration to disappear as an empty configuration still exists after
update. + return diags } -func flattenNotificationConfigurationFilter(filter *s3.NotificationConfigurationFilter) map[string]interface{} { +func flattenNotificationConfigurationFilter(filter *types.NotificationConfigurationFilter) map[string]interface{} { filterRules := map[string]interface{}{} if filter.Key == nil || filter.Key.FilterRules == nil { return filterRules } for _, f := range filter.Key.FilterRules { - if strings.ToLower(*f.Name) == s3.FilterRuleNamePrefix { - filterRules["filter_prefix"] = aws.StringValue(f.Value) - } - if strings.ToLower(*f.Name) == s3.FilterRuleNameSuffix { - filterRules["filter_suffix"] = aws.StringValue(f.Value) + name := strings.ToLower(string(f.Name)) + if name == string(types.FilterRuleNamePrefix) { + filterRules["filter_prefix"] = aws.ToString(f.Value) + } else if name == string(types.FilterRuleNameSuffix) { + filterRules["filter_suffix"] = aws.ToString(f.Value) } } return filterRules } -func flattenTopicConfigurations(configs []*s3.TopicConfiguration) []map[string]interface{} { +func flattenTopicConfigurations(configs []types.TopicConfiguration) []map[string]interface{} { topicNotifications := make([]map[string]interface{}, 0, len(configs)) for _, notification := range configs { var conf map[string]interface{} @@ -455,16 +406,16 @@ func flattenTopicConfigurations(configs []*s3.TopicConfiguration) []map[string]i conf = map[string]interface{}{} } - conf["id"] = aws.StringValue(notification.Id) - conf["events"] = flex.FlattenStringSet(notification.Events) - conf["topic_arn"] = aws.StringValue(notification.TopicArn) + conf["id"] = aws.ToString(notification.Id) + conf["events"] = notification.Events + conf["topic_arn"] = aws.ToString(notification.TopicArn) topicNotifications = append(topicNotifications, conf) } return topicNotifications } -func flattenQueueConfigurations(configs []*s3.QueueConfiguration) []map[string]interface{} { +func flattenQueueConfigurations(configs []types.QueueConfiguration) []map[string]interface{} { queueNotifications := make([]map[string]interface{}, 0, len(configs)) for _, notification := range configs { var conf map[string]interface{} @@ -474,16 +425,16 @@ func flattenQueueConfigurations(configs []*s3.QueueConfiguration) []map[string]i conf = map[string]interface{}{} } - conf["id"] = aws.StringValue(notification.Id) - conf["events"] = flex.FlattenStringSet(notification.Events) - conf["queue_arn"] = aws.StringValue(notification.QueueArn) + conf["id"] = aws.ToString(notification.Id) + conf["events"] = notification.Events + conf["queue_arn"] = aws.ToString(notification.QueueArn) queueNotifications = append(queueNotifications, conf) } return queueNotifications } -func flattenLambdaFunctionConfigurations(configs []*s3.LambdaFunctionConfiguration) []map[string]interface{} { +func flattenLambdaFunctionConfigurations(configs []types.LambdaFunctionConfiguration) []map[string]interface{} { lambdaFunctionNotifications := make([]map[string]interface{}, 0, len(configs)) for _, notification := range configs { var conf map[string]interface{} @@ -493,11 +444,39 @@ func flattenLambdaFunctionConfigurations(configs []*s3.LambdaFunctionConfigurati conf = map[string]interface{}{} } - conf["id"] = aws.StringValue(notification.Id) - conf["events"] = flex.FlattenStringSet(notification.Events) - conf["lambda_function_arn"] = aws.StringValue(notification.LambdaFunctionArn) + conf["id"] = aws.ToString(notification.Id) + conf["events"] = notification.Events + conf["lambda_function_arn"] = aws.ToString(notification.LambdaFunctionArn) 
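These flatteners reverse the expansion logic earlier in the file: typed prefix/suffix FilterRules become filter_prefix/filter_suffix map entries, with the rule name lowercased before matching (defensive, since the service has been observed to return capitalized names such as "Prefix"). A self-contained sketch of that round-trip using only SDK v2 types; the map keys mirror the resource's attribute names:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	// Expansion direction: schema strings become typed filter rules.
	rules := []types.FilterRule{
		{Name: types.FilterRuleNamePrefix, Value: aws.String("tf-acc-test/")},
		{Name: types.FilterRuleNameSuffix, Value: aws.String(".png")},
	}

	// Flattening direction: typed rules back to schema map entries,
	// lowercasing the name in case the service capitalizes it.
	flat := map[string]string{}
	for _, r := range rules {
		switch strings.ToLower(string(r.Name)) {
		case string(types.FilterRuleNamePrefix):
			flat["filter_prefix"] = aws.ToString(r.Value)
		case string(types.FilterRuleNameSuffix):
			flat["filter_suffix"] = aws.ToString(r.Value)
		}
	}
	fmt.Println(flat) // map[filter_prefix:tf-acc-test/ filter_suffix:.png]
}
```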
lambdaFunctionNotifications = append(lambdaFunctionNotifications, conf) } return lambdaFunctionNotifications } + +func findBucketNotificationConfiguration(ctx context.Context, conn *s3.Client, bucket, expectedBucketOwner string) (*s3.GetBucketNotificationConfigurationOutput, error) { + input := &s3.GetBucketNotificationConfigurationInput{ + Bucket: aws.String(bucket), + } + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + output, err := conn.GetBucketNotificationConfiguration(ctx, input) + + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} diff --git a/internal/service/s3/bucket_notification_test.go b/internal/service/s3/bucket_notification_test.go index 62c3cf7ceb1..19b90debad2 100644 --- a/internal/service/s3/bucket_notification_test.go +++ b/internal/service/s3/bucket_notification_test.go @@ -6,37 +6,40 @@ package s3_test import ( "context" "fmt" - "reflect" - "sort" "testing" - "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/aws/aws-sdk-go-v2/service/s3" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketNotification_eventbridge(t *testing.T) { ctx := acctest.Context(t) + var v s3.GetBucketNotificationConfigurationOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_notification.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketNotificationDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccBucketNotificationConfig_eventBridge(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketEventBridgeNotification(ctx, "aws_s3_bucket.test")), + testAccCheckBucketNotificationExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "eventbridge", "true"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.#", "0"), + resource.TestCheckResourceAttr(resourceName, "queue.#", "0"), + resource.TestCheckResourceAttr(resourceName, "topic.#", "0"), + ), }, { ResourceName: resourceName, @@ -49,35 +52,27 @@ func TestAccS3BucketNotification_eventbridge(t *testing.T) { func TestAccS3BucketNotification_lambdaFunction(t *testing.T) { ctx := acctest.Context(t) + var v s3.GetBucketNotificationConfigurationOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_notification.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: 
acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketNotificationDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccBucketNotificationConfig_lambdaFunction(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketLambdaFunctionConfiguration(ctx, "aws_s3_bucket.test", - "notification-lambda", - "aws_lambda_function.test", - []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, - &s3.KeyFilter{ - FilterRules: []*s3.FilterRule{ - { - Name: aws.String("Prefix"), - Value: aws.String("tf-acc-test/"), - }, - { - Name: aws.String("Suffix"), - Value: aws.String(".png"), - }, - }, - }, - ), + testAccCheckBucketNotificationExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "eventbridge", "false"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.#", "1"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.0.events.#", "2"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.0.filter_prefix", "tf-acc-test/"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.0.filter_suffix", ".png"), + resource.TestCheckResourceAttr(resourceName, "queue.#", "0"), + resource.TestCheckResourceAttr(resourceName, "topic.#", "0"), ), }, { @@ -91,24 +86,27 @@ func TestAccS3BucketNotification_lambdaFunction(t *testing.T) { func TestAccS3BucketNotification_LambdaFunctionLambdaFunctionARN_alias(t *testing.T) { ctx := acctest.Context(t) + var v s3.GetBucketNotificationConfigurationOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_notification.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketNotificationDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccBucketNotificationConfig_lambdaFunctionLambdaFunctionARNAlias(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketLambdaFunctionConfiguration(ctx, "aws_s3_bucket.test", - "test", - "aws_lambda_alias.test", - []string{"s3:ObjectCreated:*"}, - nil, - ), + testAccCheckBucketNotificationExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "eventbridge", "false"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.#", "1"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.0.events.#", "1"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.0.filter_prefix", ""), + resource.TestCheckResourceAttr(resourceName, "lambda_function.0.filter_suffix", ""), + resource.TestCheckResourceAttr(resourceName, "queue.#", "0"), + resource.TestCheckResourceAttr(resourceName, "topic.#", "0"), ), }, { @@ -122,35 +120,27 @@ func TestAccS3BucketNotification_LambdaFunctionLambdaFunctionARN_alias(t *testin func TestAccS3BucketNotification_queue(t *testing.T) { ctx := acctest.Context(t) + var v s3.GetBucketNotificationConfigurationOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_notification.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketNotificationDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccBucketNotificationConfig_queue(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketQueueNotification(ctx, "aws_s3_bucket.test", - "notification-sqs", - "aws_sqs_queue.test", - []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, - &s3.KeyFilter{ - FilterRules: []*s3.FilterRule{ - { - Name: aws.String("Prefix"), - Value: aws.String("tf-acc-test/"), - }, - { - Name: aws.String("Suffix"), - Value: aws.String(".mp4"), - }, - }, - }, - ), + testAccCheckBucketNotificationExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "eventbridge", "false"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.#", "0"), + resource.TestCheckResourceAttr(resourceName, "queue.#", "1"), + resource.TestCheckResourceAttr(resourceName, "queue.0.events.#", "2"), + resource.TestCheckResourceAttr(resourceName, "queue.0.filter_prefix", "tf-acc-test/"), + resource.TestCheckResourceAttr(resourceName, "queue.0.filter_suffix", ".mp4"), + resource.TestCheckResourceAttr(resourceName, "topic.#", "0"), ), }, { @@ -164,24 +154,27 @@ func TestAccS3BucketNotification_queue(t *testing.T) { func TestAccS3BucketNotification_topic(t *testing.T) { ctx := acctest.Context(t) + var v s3.GetBucketNotificationConfigurationOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_notification.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketNotificationDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccBucketNotificationConfig_topic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketTopicNotification(ctx, "aws_s3_bucket.test", - "notification-sns1", - "aws_sns_topic.test", - []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, - nil, - ), + testAccCheckBucketNotificationExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "eventbridge", "false"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.#", "0"), + resource.TestCheckResourceAttr(resourceName, "queue.#", "0"), + resource.TestCheckResourceAttr(resourceName, "topic.#", "1"), + resource.TestCheckResourceAttr(resourceName, "topic.0.events.#", "2"), + resource.TestCheckResourceAttr(resourceName, "topic.0.filter_prefix", ""), + resource.TestCheckResourceAttr(resourceName, "topic.0.filter_suffix", ""), ), }, { @@ -195,48 +188,30 @@ func TestAccS3BucketNotification_topic(t *testing.T) { func TestAccS3BucketNotification_Topic_multiple(t *testing.T) { ctx := acctest.Context(t) + var v s3.GetBucketNotificationConfigurationOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_bucket_notification.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketNotificationDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccBucketNotificationConfig_topicMultiple(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketTopicNotification(ctx, "aws_s3_bucket.test", - 
"notification-sns1", - "aws_sns_topic.test", - []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, - &s3.KeyFilter{ - FilterRules: []*s3.FilterRule{ - { - Name: aws.String("Prefix"), - Value: aws.String("tf-acc-test/"), - }, - { - Name: aws.String("Suffix"), - Value: aws.String(".txt"), - }, - }, - }, - ), - testAccCheckBucketTopicNotification(ctx, "aws_s3_bucket.test", - "notification-sns2", - "aws_sns_topic.test", - []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, - &s3.KeyFilter{ - FilterRules: []*s3.FilterRule{ - { - Name: aws.String("Suffix"), - Value: aws.String(".log"), - }, - }, - }, - ), + testAccCheckBucketNotificationExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "eventbridge", "false"), + resource.TestCheckResourceAttr(resourceName, "lambda_function.#", "0"), + resource.TestCheckResourceAttr(resourceName, "queue.#", "0"), + resource.TestCheckResourceAttr(resourceName, "topic.#", "2"), + resource.TestCheckResourceAttr(resourceName, "topic.0.events.#", "2"), + resource.TestCheckResourceAttr(resourceName, "topic.0.filter_prefix", "tf-acc-test/"), + resource.TestCheckResourceAttr(resourceName, "topic.0.filter_suffix", ".txt"), + resource.TestCheckResourceAttr(resourceName, "topic.1.events.#", "2"), + resource.TestCheckResourceAttr(resourceName, "topic.1.filter_prefix", ""), + resource.TestCheckResourceAttr(resourceName, "topic.1.filter_suffix", ".log"), ), }, { @@ -250,45 +225,26 @@ func TestAccS3BucketNotification_Topic_multiple(t *testing.T) { func TestAccS3BucketNotification_update(t *testing.T) { ctx := acctest.Context(t) + var v s3.GetBucketNotificationConfigurationOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_notification.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketNotificationDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccBucketNotificationConfig_topic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketTopicNotification(ctx, "aws_s3_bucket.test", - "notification-sns1", - "aws_sns_topic.test", - []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, - nil, - ), + testAccCheckBucketNotificationExists(ctx, resourceName, &v), ), }, { Config: testAccBucketNotificationConfig_queue(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketQueueNotification(ctx, "aws_s3_bucket.test", - "notification-sqs", - "aws_sqs_queue.test", - []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, - &s3.KeyFilter{ - FilterRules: []*s3.FilterRule{ - { - Name: aws.String("Prefix"), - Value: aws.String("tf-acc-test/"), - }, - { - Name: aws.String("Suffix"), - Value: aws.String(".mp4"), - }, - }, - }, - ), + testAccCheckBucketNotificationExists(ctx, resourceName, &v), ), }, }, @@ -297,239 +253,48 @@ func TestAccS3BucketNotification_update(t *testing.T) { func testAccCheckBucketNotificationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_notification" { continue } - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - out, err := 
conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ - Bucket: aws.String(rs.Primary.ID), - }) - - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - return nil - } - - if err != nil { - return retry.NonRetryableError(err) - } - - if len(out.TopicConfigurations) > 0 { - return retry.RetryableError(fmt.Errorf("TopicConfigurations is exists: %v", out)) - } - if len(out.LambdaFunctionConfigurations) > 0 { - return retry.RetryableError(fmt.Errorf("LambdaFunctionConfigurations is exists: %v", out)) - } - if len(out.QueueConfigurations) > 0 { - return retry.RetryableError(fmt.Errorf("QueueConfigurations is exists: %v", out)) - } - - return nil - }) - - if err != nil { - return err - } - } - return nil - } -} - -func testAccCheckBucketTopicNotification(ctx context.Context, n, i, t string, events []string, filters *s3.KeyFilter) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - topicArn := s.RootModule().Resources[t].Primary.ID - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - out, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ - Bucket: aws.String(rs.Primary.ID), - }) - - if err != nil { - return retry.NonRetryableError(fmt.Errorf("GetBucketNotification error: %v", err)) - } - - eventSlice := sort.StringSlice(events) - eventSlice.Sort() - - outputTopics := out.TopicConfigurations - matched := false - for _, outputTopic := range outputTopics { - if *outputTopic.Id == i { - matched = true - - if *outputTopic.TopicArn != topicArn { - return retry.RetryableError(fmt.Errorf("bad topic arn, expected: %s, got %#v", topicArn, *outputTopic.TopicArn)) - } - - if filters != nil { - if !reflect.DeepEqual(filters, outputTopic.Filter.Key) { - return retry.RetryableError(fmt.Errorf("bad notification filters, expected: %#v, got %#v", filters, outputTopic.Filter.Key)) - } - } else { - if outputTopic.Filter != nil { - return retry.RetryableError(fmt.Errorf("bad notification filters, expected: nil, got %#v", outputTopic.Filter)) - } - } - - outputEventSlice := sort.StringSlice(aws.StringValueSlice(outputTopic.Events)) - outputEventSlice.Sort() - if !reflect.DeepEqual(eventSlice, outputEventSlice) { - return retry.RetryableError(fmt.Errorf("bad notification events, expected: %#v, got %#v", events, outputEventSlice)) - } - } - } - - if !matched { - return retry.RetryableError(fmt.Errorf("No match topic configurations: %#v", out)) - } - - return nil - }) - - return err - } -} - -func testAccCheckBucketEventBridgeNotification(ctx context.Context, n string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - out, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ - Bucket: aws.String(rs.Primary.ID), - }) + _, err := tfs3.FindBucketNotificationConfiguration(ctx, conn, rs.Primary.ID, "") - if err != nil { - return retry.NonRetryableError(fmt.Errorf("GetBucketNotification error: %v", err)) - } - - if out.EventBridgeConfiguration == nil { - return retry.RetryableError(fmt.Errorf("No EventBridge configuration: %#v", out)) - } else { - return nil + if tfresource.NotFound(err) { + continue } - }) - - return err - } 
-} - -func testAccCheckBucketQueueNotification(ctx context.Context, n, i, t string, events []string, filters *s3.KeyFilter) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - queueArn := s.RootModule().Resources[t].Primary.Attributes["arn"] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - out, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ - Bucket: aws.String(rs.Primary.ID), - }) if err != nil { - return retry.NonRetryableError(fmt.Errorf("GetBucketNotification error: %v", err)) - } - - eventSlice := sort.StringSlice(events) - eventSlice.Sort() - - outputQueues := out.QueueConfigurations - matched := false - for _, outputQueue := range outputQueues { - if *outputQueue.Id == i { - matched = true - - if *outputQueue.QueueArn != queueArn { - return retry.RetryableError(fmt.Errorf("bad queue arn, expected: %s, got %#v", queueArn, *outputQueue.QueueArn)) - } - - if filters != nil { - if !reflect.DeepEqual(filters, outputQueue.Filter.Key) { - return retry.RetryableError(fmt.Errorf("bad notification filters, expected: %#v, got %#v", filters, outputQueue.Filter.Key)) - } - } else { - if outputQueue.Filter != nil { - return retry.RetryableError(fmt.Errorf("bad notification filters, expected: nil, got %#v", outputQueue.Filter)) - } - } - - outputEventSlice := sort.StringSlice(aws.StringValueSlice(outputQueue.Events)) - outputEventSlice.Sort() - if !reflect.DeepEqual(eventSlice, outputEventSlice) { - return retry.RetryableError(fmt.Errorf("bad notification events, expected: %#v, got %#v", events, outputEventSlice)) - } - } - } - - if !matched { - return retry.RetryableError(fmt.Errorf("No match queue configurations: %#v", out)) + return err } - return nil - }) + return fmt.Errorf("S3 Bucket Notification %s still exists", rs.Primary.ID) + } - return err + return nil } } -func testAccCheckBucketLambdaFunctionConfiguration(ctx context.Context, n, i, t string, events []string, filters *s3.KeyFilter) resource.TestCheckFunc { +func testAccCheckBucketNotificationExists(ctx context.Context, n string, v *s3.GetBucketNotificationConfigurationOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - funcArn := s.RootModule().Resources[t].Primary.Attributes["arn"] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - out, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ - Bucket: aws.String(rs.Primary.ID), - }) + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } - if err != nil { - return retry.NonRetryableError(fmt.Errorf("GetBucketNotification error: %v", err)) - } + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) - eventSlice := sort.StringSlice(events) - eventSlice.Sort() - - outputFunctions := out.LambdaFunctionConfigurations - matched := false - for _, outputFunc := range outputFunctions { - if *outputFunc.Id == i { - matched = true - - if *outputFunc.LambdaFunctionArn != funcArn { - return retry.RetryableError(fmt.Errorf("bad lambda function arn, expected: %s, got %#v", funcArn, *outputFunc.LambdaFunctionArn)) - } - - if filters != nil { - if !reflect.DeepEqual(filters, outputFunc.Filter.Key) { - return 
retry.RetryableError(fmt.Errorf("bad notification filters, expected: %#v, got %#v", filters, outputFunc.Filter.Key)) - } - } else { - if outputFunc.Filter != nil { - return retry.RetryableError(fmt.Errorf("bad notification filters, expected: nil, got %#v", outputFunc.Filter)) - } - } - - outputEventSlice := sort.StringSlice(aws.StringValueSlice(outputFunc.Events)) - outputEventSlice.Sort() - if !reflect.DeepEqual(eventSlice, outputEventSlice) { - return retry.RetryableError(fmt.Errorf("bad notification events, expected: %#v, got %#v", events, outputEventSlice)) - } - } - } + output, err := tfs3.FindBucketNotificationConfiguration(ctx, conn, rs.Primary.ID, "") - if !matched { - return retry.RetryableError(fmt.Errorf("No match lambda function configurations: %#v", out)) - } + if err != nil { + return err + } - return nil - }) + *v = *output - return err + return nil } } diff --git a/internal/service/s3/bucket_object.go b/internal/service/s3/bucket_object.go index bc696e299e6..1a6fadf590a 100644 --- a/internal/service/s3/bucket_object.go +++ b/internal/service/s3/bucket_object.go @@ -14,22 +14,19 @@ import ( "fmt" "io" "log" - "net/http" "os" "strings" - "time" - "github.com/YakDriver/regexache" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/aws/aws-sdk-go/service/s3/s3manager" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/feature/s3/manager" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/service/kms" @@ -60,10 +57,10 @@ func ResourceBucketObject() *schema.Resource { Schema: map[string]*schema.Schema{ "acl": { - Type: schema.TypeString, - Default: s3.ObjectCannedACLPrivate, - Optional: true, - ValidateFunc: validation.StringInSlice(s3.ObjectCannedACL_Values(), false), + Type: schema.TypeString, + Default: types.ObjectCannedACLPrivate, + Optional: true, + ValidateDiagFunc: enum.Validate[types.ObjectCannedACL](), }, "bucket": { Deprecated: "Use the aws_s3_object resource instead", @@ -136,7 +133,7 @@ func ResourceBucketObject() *schema.Resource { ValidateFunc: verify.ValidARN, DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { // ignore diffs where the user hasn't specified a kms_key_id but the bucket has a default KMS key configured - if new == "" && d.Get("server_side_encryption") == s3.ServerSideEncryptionAwsKms { + if new == "" && d.Get("server_side_encryption") == types.ServerSideEncryptionAwsKms { return true } return false @@ -149,14 +146,14 @@ func ResourceBucketObject() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, }, "object_lock_legal_hold_status": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice(s3.ObjectLockLegalHoldStatus_Values(), false), + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: 
enum.Validate[types.ObjectLockLegalHoldStatus](), }, "object_lock_mode": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice(s3.ObjectLockMode_Values(), false), + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.ObjectLockMode](), }, "object_lock_retain_until_date": { Type: schema.TypeString, @@ -164,10 +161,10 @@ func ResourceBucketObject() *schema.Resource { ValidateFunc: validation.IsRFC3339Time, }, "server_side_encryption": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice(s3.ServerSideEncryption_Values(), false), - Computed: true, + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.ServerSideEncryption](), + Computed: true, }, "source": { Type: schema.TypeString, @@ -179,10 +176,10 @@ func ResourceBucketObject() *schema.Resource { Optional: true, }, "storage_class": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ValidateFunc: validation.StringInSlice(s3.ObjectStorageClass_Values(), false), + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateDiagFunc: enum.Validate[types.ObjectStorageClass](), }, names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), @@ -206,17 +203,12 @@ func resourceBucketObjectCreate(ctx context.Context, d *schema.ResourceData, met } func resourceBucketObjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - const ( - objectCreationTimeout = 2 * time.Minute - ) var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket := d.Get("bucket").(string) - key := d.Get("key").(string) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, objectCreationTimeout, func() (interface{}, error) { - return FindObjectByThreePartKeyV1(ctx, conn, bucket, key, "") - }, d.IsNewResource()) + key := sdkv1CompatibleCleanKey(d.Get("key").(string)) + output, err := findObjectByBucketAndKey(ctx, conn, bucket, key, "", "") if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] S3 Object (%s) not found, removing from state", d.Id()) @@ -228,59 +220,36 @@ func resourceBucketObjectRead(ctx context.Context, d *schema.ResourceData, meta return sdkdiag.AppendErrorf(diags, "reading S3 Object (%s): %s", d.Id(), err) } - output := outputRaw.(*s3.HeadObjectOutput) - d.Set("bucket_key_enabled", output.BucketKeyEnabled) d.Set("cache_control", output.CacheControl) d.Set("content_disposition", output.ContentDisposition) d.Set("content_encoding", output.ContentEncoding) d.Set("content_language", output.ContentLanguage) d.Set("content_type", output.ContentType) - metadata := flex.PointersMapToStringList(output.Metadata) - - // AWS Go SDK capitalizes metadata, this is a workaround. 
https://github.com/aws/aws-sdk-go/issues/445 - for k, v := range metadata { - delete(metadata, k) - metadata[strings.ToLower(k)] = v - } - - if err := d.Set("metadata", metadata); err != nil { - return sdkdiag.AppendErrorf(diags, "setting metadata: %s", err) - } - d.Set("version_id", output.VersionId) - d.Set("server_side_encryption", output.ServerSideEncryption) - d.Set("website_redirect", output.WebsiteRedirectLocation) + // See https://forums.aws.amazon.com/thread.jspa?threadID=44003 + d.Set("etag", strings.Trim(aws.ToString(output.ETag), `"`)) + d.Set("metadata", output.Metadata) d.Set("object_lock_legal_hold_status", output.ObjectLockLegalHoldStatus) d.Set("object_lock_mode", output.ObjectLockMode) d.Set("object_lock_retain_until_date", flattenObjectDate(output.ObjectLockRetainUntilDate)) - - if err := resourceBucketObjectSetKMS(ctx, d, meta, output.SSEKMSKeyId); err != nil { - return sdkdiag.AppendErrorf(diags, "object KMS: %s", err) - } - - // See https://forums.aws.amazon.com/thread.jspa?threadID=44003 - d.Set("etag", strings.Trim(aws.StringValue(output.ETag), `"`)) - + d.Set("server_side_encryption", output.ServerSideEncryption) // The "STANDARD" (which is also the default) storage // class when set would not be included in the results. - d.Set("storage_class", s3.StorageClassStandard) - if output.StorageClass != nil { // nosemgrep: ci.helper-schema-ResourceData-Set-extraneous-nil-check + d.Set("storage_class", types.ObjectStorageClassStandard) + if output.StorageClass != "" { d.Set("storage_class", output.StorageClass) } + d.Set("version_id", output.VersionId) + d.Set("website_redirect", output.WebsiteRedirectLocation) - // Retry due to S3 eventual consistency - tagsRaw, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 2*time.Minute, func() (interface{}, error) { - return ObjectListTagsV1(ctx, conn, bucket, key) - }, s3.ErrCodeNoSuchBucket) - - if err != nil { - return sdkdiag.AppendErrorf(diags, "listing tags for S3 Bucket (%s) Object (%s): %s", bucket, key, err) + if err := resourceBucketObjectSetKMS(ctx, d, meta, output.SSEKMSKeyId); err != nil { + return sdkdiag.AppendFromErr(diags, err) } - tags, ok := tagsRaw.(tftags.KeyValueTags) + tags, err := ObjectListTags(ctx, conn, bucket, key) - if !ok { - return sdkdiag.AppendErrorf(diags, "listing tags for S3 Bucket (%s) Object (%s): unable to convert tags", bucket, key) + if err != nil { + return sdkdiag.AppendErrorf(diags, "listing tags for S3 Bucket (%s) Object (%s): %s", bucket, key, err) } setTagsOut(ctx, Tags(tags)) @@ -294,41 +263,47 @@ func resourceBucketObjectUpdate(ctx context.Context, d *schema.ResourceData, met return append(diags, resourceBucketObjectUpload(ctx, d, meta)...) 
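The storage_class handling in the Read function above shows a recurring v1-to-v2 change: enums are typed strings rather than *string, so an absent field is the empty string, not nil. A small runnable sketch of that convention; the helper name is ours, not the provider's:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// storageClassOrDefault demonstrates the SDK v2 enum convention: the zero
// value "" marks an unset field, so the nil checks from SDK v1 become
// empty-string checks.
func storageClassOrDefault(sc types.ObjectStorageClass) types.ObjectStorageClass {
	if sc == "" {
		// "STANDARD" is omitted from HeadObject responses, so treat the
		// zero value as the default class.
		return types.ObjectStorageClassStandard
	}
	return sc
}

func main() {
	fmt.Println(storageClassOrDefault(""))                              // STANDARD
	fmt.Println(storageClassOrDefault(types.ObjectStorageClassGlacier)) // GLACIER
}
```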
} - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket := d.Get("bucket").(string) - key := d.Get("key").(string) + key := sdkv1CompatibleCleanKey(d.Get("key").(string)) if d.HasChange("acl") { - _, err := conn.PutObjectAclWithContext(ctx, &s3.PutObjectAclInput{ + input := &s3.PutObjectAclInput{ + ACL: types.ObjectCannedACL(d.Get("acl").(string)), Bucket: aws.String(bucket), Key: aws.String(key), - ACL: aws.String(d.Get("acl").(string)), - }) + } + + _, err := conn.PutObjectAcl(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "putting S3 object ACL: %s", err) + return sdkdiag.AppendErrorf(diags, "putting S3 Object (%s) ACL: %s", d.Id(), err) } } if d.HasChange("object_lock_legal_hold_status") { - _, err := conn.PutObjectLegalHoldWithContext(ctx, &s3.PutObjectLegalHoldInput{ + input := &s3.PutObjectLegalHoldInput{ Bucket: aws.String(bucket), Key: aws.String(key), - LegalHold: &s3.ObjectLockLegalHold{ - Status: aws.String(d.Get("object_lock_legal_hold_status").(string)), + LegalHold: &types.ObjectLockLegalHold{ + Status: types.ObjectLockLegalHoldStatus(d.Get("object_lock_legal_hold_status").(string)), }, - }) + } + + _, err := conn.PutObjectLegalHold(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "putting S3 object lock legal hold: %s", err) + return sdkdiag.AppendErrorf(diags, "putting S3 Object (%s) legal hold: %s", d.Id(), err) } } if d.HasChanges("object_lock_mode", "object_lock_retain_until_date") { - req := &s3.PutObjectRetentionInput{ + input := &s3.PutObjectRetentionInput{ Bucket: aws.String(bucket), Key: aws.String(key), - Retention: &s3.ObjectLockRetention{ - Mode: aws.String(d.Get("object_lock_mode").(string)), + Retention: &types.ObjectLockRetention{ + Mode: types.ObjectLockRetentionMode(d.Get("object_lock_mode").(string)), RetainUntilDate: expandObjectDate(d.Get("object_lock_retain_until_date").(string)), }, } @@ -336,24 +311,25 @@ func resourceBucketObjectUpdate(ctx context.Context, d *schema.ResourceData, met // Bypass required to lower or clear retain-until date. 
if d.HasChange("object_lock_retain_until_date") { oraw, nraw := d.GetChange("object_lock_retain_until_date") - o := expandObjectDate(oraw.(string)) - n := expandObjectDate(nraw.(string)) + o, n := expandObjectDate(oraw.(string)), expandObjectDate(nraw.(string)) + if n == nil || (o != nil && n.Before(*o)) { - req.BypassGovernanceRetention = aws.Bool(true) + input.BypassGovernanceRetention = true } } - _, err := conn.PutObjectRetentionWithContext(ctx, req) + _, err := conn.PutObjectRetention(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "putting S3 object lock retention: %s", err) + return sdkdiag.AppendErrorf(diags, "putting S3 Object (%s) retention: %s", d.Id(), err) } } if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if err := ObjectUpdateTagsV1(ctx, conn, bucket, key, o, n); err != nil { - return sdkdiag.AppendErrorf(diags, "updating S3 Bucket (%s) Object (%s) tags: %s", bucket, key, err) + if err := ObjectUpdateTags(ctx, conn, bucket, key, o, n); err != nil { + return sdkdiag.AppendErrorf(diags, "updating tags: %s", err) } } @@ -365,11 +341,7 @@ func resourceBucketObjectDelete(ctx context.Context, d *schema.ResourceData, met conn := meta.(*conns.AWSClient).S3Client(ctx) bucket := d.Get("bucket").(string) - key := d.Get("key").(string) - // We are effectively ignoring all leading '/'s in the key name and - // treating multiple '/'s as a single '/' as aws.Config.DisableRestProtocolURICleaning is false - key = strings.TrimLeft(key, "/") - key = regexache.MustCompile(`/+`).ReplaceAllString(key, "/") + key := sdkv1CompatibleCleanKey(d.Get("key").(string)) var err error if _, ok := d.GetOk("version_id"); ok { @@ -406,8 +378,8 @@ func resourceBucketObjectImport(ctx context.Context, d *schema.ResourceData, met func resourceBucketObjectUpload(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) - uploader := s3manager.NewUploaderWithClient(conn) + conn := meta.(*conns.AWSClient).S3Client(ctx) + uploader := manager.NewUploader(conn) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig tags := defaultTagsConfig.MergeTags(tftags.New(ctx, d.Get("tags").(map[string]interface{}))) @@ -447,30 +419,26 @@ func resourceBucketObjectUpload(ctx context.Context, d *schema.ResourceData, met body = bytes.NewReader([]byte{}) } - bucket := d.Get("bucket").(string) - key := d.Get("key").(string) - - input := &s3manager.UploadInput{ - ACL: aws.String(d.Get("acl").(string)), + input := &s3.PutObjectInput{ Body: body, - Bucket: aws.String(bucket), - Key: aws.String(key), + Bucket: aws.String(d.Get("bucket").(string)), + Key: aws.String(sdkv1CompatibleCleanKey(d.Get("key").(string))), } - if v, ok := d.GetOk("storage_class"); ok { - input.StorageClass = aws.String(v.(string)) + if v, ok := d.GetOk("acl"); ok { + input.ACL = types.ObjectCannedACL(v.(string)) } - if v, ok := d.GetOk("cache_control"); ok { - input.CacheControl = aws.String(v.(string)) + if v, ok := d.GetOk("bucket_key_enabled"); ok { + input.BucketKeyEnabled = v.(bool) } - if v, ok := d.GetOk("content_type"); ok { - input.ContentType = aws.String(v.(string)) + if v, ok := d.GetOk("cache_control"); ok { + input.CacheControl = aws.String(v.(string)) } - if v, ok := d.GetOk("metadata"); ok { - input.Metadata = flex.ExpandStringMap(v.(map[string]interface{})) + if v, ok := d.GetOk("content_disposition"); ok { + input.ContentDisposition = aws.String(v.(string)) } if v, ok := d.GetOk("content_encoding"); ok { @@ 
-481,21 +449,37 @@ func resourceBucketObjectUpload(ctx context.Context, d *schema.ResourceData, met input.ContentLanguage = aws.String(v.(string)) } - if v, ok := d.GetOk("content_disposition"); ok { - input.ContentDisposition = aws.String(v.(string)) + if v, ok := d.GetOk("content_type"); ok { + input.ContentType = aws.String(v.(string)) } - if v, ok := d.GetOk("bucket_key_enabled"); ok { - input.BucketKeyEnabled = aws.Bool(v.(bool)) + if v, ok := d.GetOk("kms_key_id"); ok { + input.SSEKMSKeyId = aws.String(v.(string)) + input.ServerSideEncryption = types.ServerSideEncryptionAwsKms + } + + if v, ok := d.GetOk("metadata"); ok { + input.Metadata = flex.ExpandStringValueMap(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("object_lock_legal_hold_status"); ok { + input.ObjectLockLegalHoldStatus = types.ObjectLockLegalHoldStatus(v.(string)) + } + + if v, ok := d.GetOk("object_lock_mode"); ok { + input.ObjectLockMode = types.ObjectLockMode(v.(string)) + } + + if v, ok := d.GetOk("object_lock_retain_until_date"); ok { + input.ObjectLockRetainUntilDate = expandObjectDate(v.(string)) } if v, ok := d.GetOk("server_side_encryption"); ok { - input.ServerSideEncryption = aws.String(v.(string)) + input.ServerSideEncryption = types.ServerSideEncryption(v.(string)) } - if v, ok := d.GetOk("kms_key_id"); ok { - input.SSEKMSKeyId = aws.String(v.(string)) - input.ServerSideEncryption = aws.String(s3.ServerSideEncryptionAwsKms) + if v, ok := d.GetOk("storage_class"); ok { + input.StorageClass = types.StorageClass(v.(string)) } if len(tags) > 0 { @@ -507,24 +491,20 @@ func resourceBucketObjectUpload(ctx context.Context, d *schema.ResourceData, met input.WebsiteRedirectLocation = aws.String(v.(string)) } - if v, ok := d.GetOk("object_lock_legal_hold_status"); ok { - input.ObjectLockLegalHoldStatus = aws.String(v.(string)) + if (input.ObjectLockLegalHoldStatus != "" || input.ObjectLockMode != "" || input.ObjectLockRetainUntilDate != nil) && input.ChecksumAlgorithm == "" { + // "Content-MD5 OR x-amz-checksum- HTTP header is required for Put Object requests with Object Lock parameters". + // AWS SDK for Go v1 transparently added a Content-MD5 header. + input.ChecksumAlgorithm = types.ChecksumAlgorithmCrc32 } - if v, ok := d.GetOk("object_lock_mode"); ok { - input.ObjectLockMode = aws.String(v.(string)) + if _, err := uploader.Upload(ctx, input); err != nil { + return sdkdiag.AppendErrorf(diags, "uploading S3 Object (%s) to Bucket (%s): %s", aws.ToString(input.Key), aws.ToString(input.Bucket), err) } - if v, ok := d.GetOk("object_lock_retain_until_date"); ok { - input.ObjectLockRetainUntilDate = expandObjectDate(v.(string)) - } - - if _, err := uploader.Upload(input); err != nil { - return sdkdiag.AppendErrorf(diags, "uploading object to S3 bucket (%s): %s", bucket, err) + if d.IsNewResource() { + d.SetId(d.Get("key").(string)) } - d.SetId(key) - return append(diags, resourceBucketObjectRead(ctx, d, meta)...)
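Uploads now go through the SDK v2 transfer manager: manager.NewUploader wraps the *s3.Client, and Upload takes a context plus a plain *s3.PutObjectInput, whereas the v1 s3manager required its own UploadInput type. A standalone sketch of that call shape; the bucket, key, and body below are placeholders:

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	// The v2 uploader is constructed from the service client itself and
	// reuses the service's PutObjectInput type.
	uploader := manager.NewUploader(s3.NewFromConfig(cfg))
	_, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
		Bucket: aws.String("example-bucket"), // placeholder
		Key:    aws.String("example/key"),    // placeholder
		Body:   strings.NewReader("hello"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```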
} @@ -538,8 +518,8 @@ func resourceBucketObjectSetKMS(ctx context.Context, d *schema.ResourceData, met return fmt.Errorf("Failed to describe default S3 KMS key (%s): %s", DefaultKMSKeyAlias, err) } - if aws.StringValue(sseKMSKeyId) != aws.StringValue(keyMetadata.Arn) { - log.Printf("[DEBUG] S3 object is encrypted using a non-default KMS Key ID: %s", aws.StringValue(sseKMSKeyId)) + if kmsKeyID := aws.ToString(sseKMSKeyId); kmsKeyID != aws.ToString(keyMetadata.Arn) { + log.Printf("[DEBUG] S3 object is encrypted using a non-default KMS Key ID: %s", kmsKeyID) d.Set("kms_key_id", sseKMSKeyId) } } @@ -585,32 +565,3 @@ func hasBucketObjectContentChanges(d verify.ResourceDiffer) bool { } return false } - -func FindObjectByThreePartKeyV1(ctx context.Context, conn *s3.S3, bucket, key, etag string) (*s3.HeadObjectOutput, error) { - input := &s3.HeadObjectInput{ - Bucket: aws.String(bucket), - Key: aws.String(key), - } - if etag != "" { - input.IfMatch = aws.String(etag) - } - - output, err := conn.HeadObjectWithContext(ctx, input) - - if tfawserr.ErrStatusCodeEquals(err, http.StatusNotFound) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - if output == nil { - return nil, tfresource.NewEmptyResultError(input) - } - - return output, nil -} diff --git a/internal/service/s3/bucket_object_data_source.go b/internal/service/s3/bucket_object_data_source.go index f66bd8f782a..a81a16531c6 100644 --- a/internal/service/s3/bucket_object_data_source.go +++ b/internal/service/s3/bucket_object_data_source.go @@ -10,18 +10,16 @@ package s3 import ( "bytes" "context" - "fmt" - "log" "strings" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" ) @@ -121,6 +119,7 @@ func DataSourceBucketObject() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "tags": tftags.TagsSchemaComputed(), "version_id": { Type: schema.TypeString, Optional: true, @@ -130,8 +129,6 @@ func DataSourceBucketObject() *schema.Resource { Type: schema.TypeString, Computed: true, }, - - "tags": tftags.TagsSchemaComputed(), }, DeprecationMessage: `use the aws_s3_object data source instead`, @@ -140,13 +137,12 @@ func DataSourceBucketObject() *schema.Resource { func dataSourceBucketObjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig bucket := d.Get("bucket").(string) - key := d.Get("key").(string) - - input := s3.HeadObjectInput{ + key := sdkv1CompatibleCleanKey(d.Get("key").(string)) + input := &s3.HeadObjectInput{ Bucket: aws.String(bucket), Key: aws.String(key), } @@ -157,25 +153,21 @@ func dataSourceBucketObjectRead(ctx context.Context, d *schema.ResourceData, met input.VersionId = aws.String(v.(string)) } - versionText := "" - uniqueId := bucket + "/" + key - if v, ok := d.GetOk("version_id"); ok { - versionText 
= fmt.Sprintf(" of version %q", v.(string)) - uniqueId += "@" + v.(string) - } + out, err := findObject(ctx, conn, input) - log.Printf("[DEBUG] Reading S3 Object: %s", input) - out, err := conn.HeadObjectWithContext(ctx, &input) if err != nil { - return sdkdiag.AppendErrorf(diags, "getting S3 Bucket (%s) Object (%s): %s", bucket, key, err) - } - if aws.BoolValue(out.DeleteMarker) { - return sdkdiag.AppendErrorf(diags, "Requested S3 object %q%s has been deleted", bucket+key, versionText) + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket (%s) Object (%s): %s", bucket, key, err) } - log.Printf("[DEBUG] Received S3 object: %s", out) + if out.DeleteMarker { + return sdkdiag.AppendErrorf(diags, "S3 Bucket (%s) Object (%s) has been deleted", bucket, key) + } - d.SetId(uniqueId) + id := bucket + "/" + d.Get("key").(string) + if v, ok := d.GetOk("version_id"); ok { + id += "@" + v.(string) + } + d.SetId(id) d.Set("bucket_key_enabled", out.BucketKeyEnabled) d.Set("cache_control", out.CacheControl) @@ -185,65 +177,58 @@ func dataSourceBucketObjectRead(ctx context.Context, d *schema.ResourceData, met d.Set("content_length", out.ContentLength) d.Set("content_type", out.ContentType) // See https://forums.aws.amazon.com/thread.jspa?threadID=44003 - d.Set("etag", strings.Trim(aws.StringValue(out.ETag), `"`)) + d.Set("etag", strings.Trim(aws.ToString(out.ETag), `"`)) d.Set("expiration", out.Expiration) - d.Set("expires", out.Expires) + if out.Expires != nil { + d.Set("expires", out.Expires.Format(time.RFC1123)) + } else { + d.Set("expires", nil) + } if out.LastModified != nil { d.Set("last_modified", out.LastModified.Format(time.RFC1123)) } else { d.Set("last_modified", "") } - d.Set("metadata", flex.PointersMapToStringList(out.Metadata)) + d.Set("metadata", out.Metadata) d.Set("object_lock_legal_hold_status", out.ObjectLockLegalHoldStatus) d.Set("object_lock_mode", out.ObjectLockMode) d.Set("object_lock_retain_until_date", flattenObjectDate(out.ObjectLockRetainUntilDate)) d.Set("server_side_encryption", out.ServerSideEncryption) d.Set("sse_kms_key_id", out.SSEKMSKeyId) - d.Set("version_id", out.VersionId) - d.Set("website_redirect_location", out.WebsiteRedirectLocation) - // The "STANDARD" (which is also the default) storage // class when set would not be included in the results. 
- d.Set("storage_class", s3.StorageClassStandard) - if out.StorageClass != nil { // nosemgrep: ci.helper-schema-ResourceData-Set-extraneous-nil-check + d.Set("storage_class", types.ObjectStorageClassStandard) + if out.StorageClass != "" { d.Set("storage_class", out.StorageClass) } + d.Set("version_id", out.VersionId) + d.Set("website_redirect_location", out.WebsiteRedirectLocation) if isContentTypeAllowed(out.ContentType) { - input := s3.GetObjectInput{ - Bucket: aws.String(bucket), - Key: aws.String(key), + input := &s3.GetObjectInput{ + Bucket: aws.String(bucket), + Key: aws.String(key), + VersionId: out.VersionId, } if v, ok := d.GetOk("range"); ok { input.Range = aws.String(v.(string)) } - if out.VersionId != nil { - input.VersionId = out.VersionId - } - out, err := conn.GetObjectWithContext(ctx, &input) + + out, err := conn.GetObject(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failed getting S3 object: %s", err) + return sdkdiag.AppendErrorf(diags, "downloading S3 Bucket (%s) Object (%s): %s", bucket, key, err) } buf := new(bytes.Buffer) - bytesRead, err := buf.ReadFrom(out.Body) - if err != nil { - return sdkdiag.AppendErrorf(diags, "Failed reading content of S3 object (%s): %s", uniqueId, err) - } - log.Printf("[INFO] Saving %d bytes from S3 object %s", bytesRead, uniqueId) - d.Set("body", buf.String()) - } else { - contentType := "" - if out.ContentType == nil { - contentType = "" - } else { - contentType = aws.StringValue(out.ContentType) + if _, err := buf.ReadFrom(out.Body); err != nil { + return sdkdiag.AppendFromErr(diags, err) } - log.Printf("[INFO] Ignoring body of S3 object %s with Content-Type %q", uniqueId, contentType) + d.Set("body", buf.String()) } - tags, err := ObjectListTagsV1(ctx, conn, bucket, key) + tags, err := ObjectListTags(ctx, conn, bucket, key) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for S3 Bucket (%s) Object (%s): %s", bucket, key, err) diff --git a/internal/service/s3/bucket_object_data_source_test.go b/internal/service/s3/bucket_object_data_source_test.go index 348bd9f73cc..a2a9d22d592 100644 --- a/internal/service/s3/bucket_object_data_source_test.go +++ b/internal/service/s3/bucket_object_data_source_test.go @@ -13,10 +13,10 @@ import ( "time" "github.com/YakDriver/regexache" - "github.com/aws/aws-sdk-go/service/s3" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketObjectDataSource_basic(t *testing.T) { @@ -28,7 +28,7 @@ func TestAccS3BucketObjectDataSource_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -59,7 +59,7 @@ func TestAccS3BucketObjectDataSource_basicViaAccessPoint(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { @@ -82,7 +82,7 @@ func TestAccS3BucketObjectDataSource_readableBody(t *testing.T) { resource.ParallelTest(t, 
resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -112,7 +112,7 @@ func TestAccS3BucketObjectDataSource_kmsEncrypted(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -144,7 +144,7 @@ func TestAccS3BucketObjectDataSource_bucketKeyEnabled(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -177,7 +177,7 @@ func TestAccS3BucketObjectDataSource_allParams(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -224,7 +224,7 @@ func TestAccS3BucketObjectDataSource_objectLockLegalHoldOff(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -255,7 +255,7 @@ func TestAccS3BucketObjectDataSource_objectLockLegalHoldOn(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -289,7 +289,7 @@ func TestAccS3BucketObjectDataSource_leadingSlash(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -336,7 +336,7 @@ func TestAccS3BucketObjectDataSource_multipleSlashes(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ diff --git a/internal/service/s3/bucket_object_test.go b/internal/service/s3/bucket_object_test.go index 81ac525f25c..8690e29b2db 100644 --- a/internal/service/s3/bucket_object_test.go +++ b/internal/service/s3/bucket_object_test.go @@ -11,26 +11,22 @@ import ( "context" "encoding/base64" "fmt" - "io" "os" - "reflect" - "sort" "testing" "time" "github.com/YakDriver/regexache" 
- "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" - tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketObject_noNameNoKey(t *testing.T) { @@ -40,7 +36,7 @@ func TestAccS3BucketObject_noNameNoKey(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -66,7 +62,7 @@ func TestAccS3BucketObject_empty(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -75,7 +71,7 @@ func TestAccS3BucketObject_empty(t *testing.T) { Config: testAccBucketObjectConfig_empty(rName), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, ""), + testAccCheckObjectBody(&obj, ""), ), }, { @@ -95,12 +91,12 @@ func TestAccS3BucketObject_source(t *testing.T) { resourceName := "aws_s3_bucket_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - source := testAccBucketObjectCreateTempFile(t, "{anything will do }") + source := testAccObjectCreateTempFile(t, "{anything will do }") defer os.Remove(source) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -108,7 +104,7 @@ func TestAccS3BucketObject_source(t *testing.T) { Config: testAccBucketObjectConfig_source(rName, source), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "{anything will do }"), + testAccCheckObjectBody(&obj, "{anything will do }"), ), }, { @@ -130,7 +126,7 @@ func TestAccS3BucketObject_content(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -139,7 +135,7 @@ func 
TestAccS3BucketObject_content(t *testing.T) { Config: testAccBucketObjectConfig_content(rName, "some_bucket_content"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "some_bucket_content"), + testAccCheckObjectBody(&obj, "some_bucket_content"), ), }, { @@ -158,12 +154,12 @@ func TestAccS3BucketObject_etagEncryption(t *testing.T) { var obj s3.GetObjectOutput resourceName := "aws_s3_bucket_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - source := testAccBucketObjectCreateTempFile(t, "{anything will do }") + source := testAccObjectCreateTempFile(t, "{anything will do }") defer os.Remove(source) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -172,7 +168,7 @@ func TestAccS3BucketObject_etagEncryption(t *testing.T) { Config: testAccBucketObjectConfig_etagEncryption(rName, source), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "{anything will do }"), + testAccCheckObjectBody(&obj, "{anything will do }"), resource.TestCheckResourceAttr(resourceName, "etag", "7b006ff4d70f68cc65061acf2f802e6f"), ), }, @@ -195,7 +191,7 @@ func TestAccS3BucketObject_contentBase64(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -204,7 +200,7 @@ func TestAccS3BucketObject_contentBase64(t *testing.T) { Config: testAccBucketObjectConfig_contentBase64(rName, base64.StdEncoding.EncodeToString([]byte("some_bucket_content"))), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "some_bucket_content"), + testAccCheckObjectBody(&obj, "some_bucket_content"), ), }, }, @@ -220,7 +216,7 @@ func TestAccS3BucketObject_sourceHashTrigger(t *testing.T) { startingData := "Ebben!" 
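
The fixed `etag` expectations in these steps (for example `7b006ff4d70f68cc65061acf2f802e6f` for `{anything will do }`) are MD5 hex digests of the object content: S3 reports the content MD5 as the ETag for single-part uploads without SSE-KMS. A quick way to derive such a value:

```go
package main

import (
	"crypto/md5"
	"fmt"
)

func main() {
	// For single-part uploads without SSE-KMS, the S3 ETag is the hex
	// MD5 of the object body, which is how fixed etag expectations like
	// the ones in the tests above can be derived.
	content := []byte("{anything will do }")
	fmt.Printf("%x\n", md5.Sum(content))
}
```
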
changingData := "Ne andrò lontana" - filename := testAccBucketObjectCreateTempFile(t, startingData) + filename := testAccObjectCreateTempFile(t, startingData) defer os.Remove(filename) rewriteFile := func(*terraform.State) error { @@ -233,7 +229,7 @@ func TestAccS3BucketObject_sourceHashTrigger(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -242,7 +238,7 @@ func TestAccS3BucketObject_sourceHashTrigger(t *testing.T) { Config: testAccBucketObjectConfig_sourceHashTrigger(rName, filename), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "Ebben!"), + testAccCheckObjectBody(&obj, "Ebben!"), resource.TestCheckResourceAttr(resourceName, "source_hash", "7c7e02a79f28968882bb1426c8f8bfc6"), rewriteFile, ), @@ -253,7 +249,7 @@ func TestAccS3BucketObject_sourceHashTrigger(t *testing.T) { Config: testAccBucketObjectConfig_sourceHashTrigger(rName, filename), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &updated_obj), - testAccCheckBucketObjectBody(&updated_obj, "Ne andrò lontana"), + testAccCheckObjectBody(&updated_obj, "Ne andrò lontana"), resource.TestCheckResourceAttr(resourceName, "source_hash", "cffc5e20de2d21764145b1124c9b337b"), ), }, @@ -274,12 +270,12 @@ func TestAccS3BucketObject_withContentCharacteristics(t *testing.T) { resourceName := "aws_s3_bucket_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - source := testAccBucketObjectCreateTempFile(t, "{anything will do }") + source := testAccObjectCreateTempFile(t, "{anything will do }") defer os.Remove(source) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -287,7 +283,7 @@ func TestAccS3BucketObject_withContentCharacteristics(t *testing.T) { Config: testAccBucketObjectConfig_contentCharacteristics(rName, source), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "{anything will do }"), + testAccCheckObjectBody(&obj, "{anything will do }"), resource.TestCheckResourceAttr(resourceName, "content_type", "binary/octet-stream"), resource.TestCheckResourceAttr(resourceName, "website_redirect", "http://google.com"), ), @@ -298,7 +294,7 @@ func TestAccS3BucketObject_withContentCharacteristics(t *testing.T) { func TestAccS3BucketObject_nonVersioned(t *testing.T) { ctx := acctest.Context(t) - sourceInitial := testAccBucketObjectCreateTempFile(t, "initial object state") + sourceInitial := testAccObjectCreateTempFile(t, "initial object state") defer os.Remove(sourceInitial) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) var originalObj s3.GetObjectOutput @@ -306,7 +302,7 @@ func TestAccS3BucketObject_nonVersioned(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckAssumeRoleARN(t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + 
ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -314,7 +310,7 @@ func TestAccS3BucketObject_nonVersioned(t *testing.T) { Config: testAccBucketObjectConfig_nonVersioned(rName, sourceInitial), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &originalObj), - testAccCheckBucketObjectBody(&originalObj, "initial object state"), + testAccCheckObjectBody(&originalObj, "initial object state"), resource.TestCheckResourceAttr(resourceName, "version_id", ""), ), }, @@ -335,14 +331,14 @@ func TestAccS3BucketObject_updates(t *testing.T) { resourceName := "aws_s3_bucket_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - sourceInitial := testAccBucketObjectCreateTempFile(t, "initial object state") + sourceInitial := testAccObjectCreateTempFile(t, "initial object state") defer os.Remove(sourceInitial) - sourceModified := testAccBucketObjectCreateTempFile(t, "modified object") + sourceModified := testAccObjectCreateTempFile(t, "modified object") defer os.Remove(sourceInitial) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -350,7 +346,7 @@ func TestAccS3BucketObject_updates(t *testing.T) { Config: testAccBucketObjectConfig_updateable(rName, false, sourceInitial), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &originalObj), - testAccCheckBucketObjectBody(&originalObj, "initial object state"), + testAccCheckObjectBody(&originalObj, "initial object state"), resource.TestCheckResourceAttr(resourceName, "etag", "647d1d58e1011c743ec67d5e8af87b53"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), @@ -361,7 +357,7 @@ func TestAccS3BucketObject_updates(t *testing.T) { Config: testAccBucketObjectConfig_updateable(rName, false, sourceModified), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &modifiedObj), - testAccCheckBucketObjectBody(&modifiedObj, "modified object"), + testAccCheckObjectBody(&modifiedObj, "modified object"), resource.TestCheckResourceAttr(resourceName, "etag", "1c7fd13df1515c2a13ad9eb068931f09"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), @@ -388,7 +384,7 @@ func TestAccS3BucketObject_updateSameFile(t *testing.T) { startingData := "lane 8" changingData := "chicane" - filename := testAccBucketObjectCreateTempFile(t, startingData) + filename := testAccObjectCreateTempFile(t, startingData) defer os.Remove(filename) rewriteFile := func(*terraform.State) error { @@ -401,7 +397,7 @@ func TestAccS3BucketObject_updateSameFile(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -409,7 +405,7 @@ func 
TestAccS3BucketObject_updateSameFile(t *testing.T) { Config: testAccBucketObjectConfig_updateable(rName, false, filename), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &originalObj), - testAccCheckBucketObjectBody(&originalObj, startingData), + testAccCheckObjectBody(&originalObj, startingData), resource.TestCheckResourceAttr(resourceName, "etag", "aa48b42f36a2652cbee40c30a5df7d25"), rewriteFile, ), @@ -419,7 +415,7 @@ func TestAccS3BucketObject_updateSameFile(t *testing.T) { Config: testAccBucketObjectConfig_updateable(rName, false, filename), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &modifiedObj), - testAccCheckBucketObjectBody(&modifiedObj, changingData), + testAccCheckObjectBody(&modifiedObj, changingData), resource.TestCheckResourceAttr(resourceName, "etag", "fafc05f8c4da0266a99154681ab86e8c"), ), }, @@ -433,14 +429,14 @@ func TestAccS3BucketObject_updatesWithVersioning(t *testing.T) { resourceName := "aws_s3_bucket_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - sourceInitial := testAccBucketObjectCreateTempFile(t, "initial versioned object state") + sourceInitial := testAccObjectCreateTempFile(t, "initial versioned object state") defer os.Remove(sourceInitial) - sourceModified := testAccBucketObjectCreateTempFile(t, "modified versioned object") + sourceModified := testAccObjectCreateTempFile(t, "modified versioned object") defer os.Remove(sourceInitial) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -448,7 +444,7 @@ func TestAccS3BucketObject_updatesWithVersioning(t *testing.T) { Config: testAccBucketObjectConfig_updateable(rName, true, sourceInitial), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &originalObj), - testAccCheckBucketObjectBody(&originalObj, "initial versioned object state"), + testAccCheckObjectBody(&originalObj, "initial versioned object state"), resource.TestCheckResourceAttr(resourceName, "etag", "cee4407fa91906284e2a5e5e03e86b1b"), ), }, @@ -456,9 +452,9 @@ func TestAccS3BucketObject_updatesWithVersioning(t *testing.T) { Config: testAccBucketObjectConfig_updateable(rName, true, sourceModified), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &modifiedObj), - testAccCheckBucketObjectBody(&modifiedObj, "modified versioned object"), + testAccCheckObjectBody(&modifiedObj, "modified versioned object"), resource.TestCheckResourceAttr(resourceName, "etag", "00b8c73b1b50e7cc932362c7225b8e29"), - testAccCheckBucketObjectVersionIdDiffers(&modifiedObj, &originalObj), + testAccCheckObjectVersionIDDiffers(&modifiedObj, &originalObj), ), }, { @@ -479,33 +475,33 @@ func TestAccS3BucketObject_updatesWithVersioningViaAccessPoint(t *testing.T) { resourceName := "aws_s3_bucket_object.test" accessPointResourceName := "aws_s3_access_point.test" - sourceInitial := testAccBucketObjectCreateTempFile(t, "initial versioned object state") + sourceInitial := testAccObjectCreateTempFile(t, "initial versioned object state") defer os.Remove(sourceInitial) - sourceModified := testAccBucketObjectCreateTempFile(t, "modified versioned object") + sourceModified := testAccObjectCreateTempFile(t, "modified 
versioned object") defer os.Remove(sourceInitial) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_updateableViaAccessPoint(rName, s3.BucketVersioningStatusEnabled, sourceInitial), + Config: testAccBucketObjectConfig_updateableViaAccessPoint(rName, string(types.BucketVersioningStatusEnabled), sourceInitial), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &originalObj), - testAccCheckBucketObjectBody(&originalObj, "initial versioned object state"), + testAccCheckObjectBody(&originalObj, "initial versioned object state"), resource.TestCheckResourceAttrPair(resourceName, "bucket", accessPointResourceName, "arn"), resource.TestCheckResourceAttr(resourceName, "etag", "cee4407fa91906284e2a5e5e03e86b1b"), ), }, { - Config: testAccBucketObjectConfig_updateableViaAccessPoint(rName, s3.BucketVersioningStatusEnabled, sourceModified), + Config: testAccBucketObjectConfig_updateableViaAccessPoint(rName, string(types.BucketVersioningStatusEnabled), sourceModified), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &modifiedObj), - testAccCheckBucketObjectBody(&modifiedObj, "modified versioned object"), + testAccCheckObjectBody(&modifiedObj, "modified versioned object"), resource.TestCheckResourceAttr(resourceName, "etag", "00b8c73b1b50e7cc932362c7225b8e29"), - testAccCheckBucketObjectVersionIdDiffers(&modifiedObj, &originalObj), + testAccCheckObjectVersionIDDiffers(&modifiedObj, &originalObj), ), }, }, @@ -518,12 +514,12 @@ func TestAccS3BucketObject_kms(t *testing.T) { resourceName := "aws_s3_bucket_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - source := testAccBucketObjectCreateTempFile(t, "{anything will do }") + source := testAccObjectCreateTempFile(t, "{anything will do }") defer os.Remove(source) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -532,8 +528,8 @@ func TestAccS3BucketObject_kms(t *testing.T) { Config: testAccBucketObjectConfig_kmsID(rName, source), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectSSE(ctx, resourceName, "aws:kms"), - testAccCheckBucketObjectBody(&obj, "{anything will do }"), + testAccCheckObjectSSE(ctx, resourceName, "aws:kms"), + testAccCheckObjectBody(&obj, "{anything will do }"), ), }, { @@ -553,12 +549,12 @@ func TestAccS3BucketObject_sse(t *testing.T) { resourceName := "aws_s3_bucket_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - source := testAccBucketObjectCreateTempFile(t, "{anything will do }") + source := testAccObjectCreateTempFile(t, "{anything will do }") defer os.Remove(source) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -567,8 +563,8 @@ func TestAccS3BucketObject_sse(t *testing.T) { Config: testAccBucketObjectConfig_sse(rName, source), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectSSE(ctx, resourceName, "AES256"), - testAccCheckBucketObjectBody(&obj, "{anything will do }"), + testAccCheckObjectSSE(ctx, resourceName, "AES256"), + testAccCheckObjectBody(&obj, "{anything will do }"), ), }, { @@ -590,37 +586,37 @@ func TestAccS3BucketObject_acl(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_acl(rName, "some_bucket_content", s3.BucketCannedACLPrivate, true), + Config: testAccBucketObjectConfig_acl(rName, "some_bucket_content", string(types.BucketCannedACLPrivate), true), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "some_bucket_content"), - resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPrivate), - testAccCheckBucketObjectACL(ctx, resourceName, []string{"FULL_CONTROL"}), + testAccCheckObjectBody(&obj1, "some_bucket_content"), + resource.TestCheckResourceAttr(resourceName, "acl", string(types.BucketCannedACLPrivate)), + testAccCheckObjectACL(ctx, resourceName, []string{"FULL_CONTROL"}), ), }, { - Config: testAccBucketObjectConfig_acl(rName, "some_bucket_content", s3.BucketCannedACLPublicRead, false), + Config: testAccBucketObjectConfig_acl(rName, "some_bucket_content", string(types.BucketCannedACLPublicRead), false), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "some_bucket_content"), - resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPublicRead), - testAccCheckBucketObjectACL(ctx, resourceName, []string{"FULL_CONTROL", "READ"}), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "some_bucket_content"), + resource.TestCheckResourceAttr(resourceName, "acl", string(types.BucketCannedACLPublicRead)), + testAccCheckObjectACL(ctx, resourceName, []string{"FULL_CONTROL", "READ"}), ), }, { - Config: testAccBucketObjectConfig_acl(rName, "changed_some_bucket_content", s3.BucketCannedACLPrivate, true), + Config: testAccBucketObjectConfig_acl(rName, "changed_some_bucket_content", string(types.BucketCannedACLPrivate), true), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj3), - testAccCheckBucketObjectVersionIdDiffers(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "changed_some_bucket_content"), - resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPrivate), - testAccCheckBucketObjectACL(ctx, resourceName, []string{"FULL_CONTROL"}), + testAccCheckObjectVersionIDDiffers(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "changed_some_bucket_content"), + resource.TestCheckResourceAttr(resourceName, "acl", string(types.BucketCannedACLPrivate)), + testAccCheckObjectACL(ctx, resourceName, []string{"FULL_CONTROL"}), ), }, { @@ 
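
The canned-ACL constants change the same way, and the ACL check helper consolidates into `testAccCheckObjectACL`. In v2, `GetObjectAcl` grants carry a non-pointer `types.Permission`, so the `*v.Permission` dereference from the v1 helper (deleted further down) is no longer needed. A sketch of the lookup such a check performs, not the provider's exact shared helper:

```go
package example

import (
	"context"
	"fmt"
	"sort"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// objectGrantPermissions lists the permissions granted on an object,
// sorted for stable comparison. In v2, Grant.Permission is a string-typed
// enum value rather than v1's *string.
func objectGrantPermissions(ctx context.Context, client *s3.Client, bucket, key string) ([]string, error) {
	out, err := client.GetObjectAcl(ctx, &s3.GetObjectAclInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return nil, fmt.Errorf("getting object ACL: %w", err)
	}

	var perms []string
	for _, g := range out.Grants {
		perms = append(perms, string(g.Permission)) // no dereference in v2
	}
	sort.Strings(perms)

	return perms, nil
}
```
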
-642,7 +638,7 @@ func TestAccS3BucketObject_metadata(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -690,7 +686,7 @@ func TestAccS3BucketObject_storageClass(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -700,7 +696,7 @@ func TestAccS3BucketObject_storageClass(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), resource.TestCheckResourceAttr(resourceName, "storage_class", "STANDARD"), - testAccCheckBucketObjectStorageClass(ctx, resourceName, "STANDARD"), + testAccCheckObjectStorageClass(ctx, resourceName, "STANDARD"), ), }, { @@ -708,7 +704,7 @@ func TestAccS3BucketObject_storageClass(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), resource.TestCheckResourceAttr(resourceName, "storage_class", "REDUCED_REDUNDANCY"), - testAccCheckBucketObjectStorageClass(ctx, resourceName, "REDUCED_REDUNDANCY"), + testAccCheckObjectStorageClass(ctx, resourceName, "REDUCED_REDUNDANCY"), ), }, { @@ -716,7 +712,7 @@ func TestAccS3BucketObject_storageClass(t *testing.T) { Check: resource.ComposeTestCheckFunc( // Can't GetObject on an object in Glacier without restoring it. resource.TestCheckResourceAttr(resourceName, "storage_class", "GLACIER"), - testAccCheckBucketObjectStorageClass(ctx, resourceName, "GLACIER"), + testAccCheckObjectStorageClass(ctx, resourceName, "GLACIER"), ), }, { @@ -724,7 +720,7 @@ func TestAccS3BucketObject_storageClass(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), resource.TestCheckResourceAttr(resourceName, "storage_class", "INTELLIGENT_TIERING"), - testAccCheckBucketObjectStorageClass(ctx, resourceName, "INTELLIGENT_TIERING"), + testAccCheckObjectStorageClass(ctx, resourceName, "INTELLIGENT_TIERING"), ), }, { @@ -732,7 +728,7 @@ func TestAccS3BucketObject_storageClass(t *testing.T) { Check: resource.ComposeTestCheckFunc( // Can't GetObject on an object in DEEP_ARCHIVE without restoring it. 
resource.TestCheckResourceAttr(resourceName, "storage_class", "DEEP_ARCHIVE"), - testAccCheckBucketObjectStorageClass(ctx, resourceName, "DEEP_ARCHIVE"), + testAccCheckObjectStorageClass(ctx, resourceName, "DEEP_ARCHIVE"), ), }, { @@ -755,7 +751,7 @@ func TestAccS3BucketObject_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -764,7 +760,7 @@ func TestAccS3BucketObject_tags(t *testing.T) { Config: testAccBucketObjectConfig_tags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -776,8 +772,8 @@ func TestAccS3BucketObject_tags(t *testing.T) { Config: testAccBucketObjectConfig_updatedTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "4"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "B@BB"), resource.TestCheckResourceAttr(resourceName, "tags.Key3", "X X"), @@ -790,8 +786,8 @@ func TestAccS3BucketObject_tags(t *testing.T) { Config: testAccBucketObjectConfig_noTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj3), - testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "stuff"), + testAccCheckObjectVersionIDEquals(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -800,8 +796,8 @@ func TestAccS3BucketObject_tags(t *testing.T) { Config: testAccBucketObjectConfig_tags(rName, key, "changed stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj4), - testAccCheckBucketObjectVersionIdDiffers(&obj4, &obj3), - testAccCheckBucketObjectBody(&obj4, "changed stuff"), + testAccCheckObjectVersionIDDiffers(&obj4, &obj3), + testAccCheckObjectBody(&obj4, "changed stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -828,7 +824,7 @@ func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -837,7 +833,7 @@ func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) { Config: testAccBucketObjectConfig_tags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( 
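
S3 omits the storage-class header for STANDARD objects, so a check helper has to normalize the default. In v1 that surfaced as a nil `*string`; in v2 the field is a non-pointer enum whose zero value is the empty string. A sketch of the normalization, assuming a `HeadObject`-shaped response like the one the provider's finder returns:

```go
package example

import (
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// effectiveStorageClass normalizes the storage class reported by S3.
// S3 omits the header for STANDARD objects; in v1 that was a nil *string,
// in v2 it is the enum's empty zero value (sketch under that assumption).
func effectiveStorageClass(out *s3.HeadObjectOutput) types.StorageClass {
	if out.StorageClass == "" {
		return types.StorageClassStandard
	}
	return out.StorageClass
}
```
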
testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -849,8 +845,8 @@ func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) { Config: testAccBucketObjectConfig_updatedTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "4"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "B@BB"), resource.TestCheckResourceAttr(resourceName, "tags.Key3", "X X"), @@ -863,8 +859,8 @@ func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) { Config: testAccBucketObjectConfig_noTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj3), - testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "stuff"), + testAccCheckObjectVersionIDEquals(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -873,8 +869,8 @@ func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) { Config: testAccBucketObjectConfig_tags(rName, key, "changed stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj4), - testAccCheckBucketObjectVersionIdDiffers(&obj4, &obj3), - testAccCheckBucketObjectBody(&obj4, "changed stuff"), + testAccCheckObjectVersionIDDiffers(&obj4, &obj3), + testAccCheckObjectBody(&obj4, "changed stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -901,7 +897,7 @@ func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -910,7 +906,7 @@ func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) { Config: testAccBucketObjectConfig_tags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -922,8 +918,8 @@ func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) { Config: testAccBucketObjectConfig_updatedTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + 
testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "4"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "B@BB"), resource.TestCheckResourceAttr(resourceName, "tags.Key3", "X X"), @@ -936,8 +932,8 @@ func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) { Config: testAccBucketObjectConfig_noTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj3), - testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "stuff"), + testAccCheckObjectVersionIDEquals(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -946,8 +942,8 @@ func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) { Config: testAccBucketObjectConfig_tags(rName, key, "changed stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj4), - testAccCheckBucketObjectVersionIdDiffers(&obj4, &obj3), - testAccCheckBucketObjectBody(&obj4, "changed stuff"), + testAccCheckObjectVersionIDDiffers(&obj4, &obj3), + testAccCheckObjectBody(&obj4, "changed stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -967,7 +963,7 @@ func TestAccS3BucketObject_tagsMultipleSlashes(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -976,7 +972,7 @@ func TestAccS3BucketObject_tagsMultipleSlashes(t *testing.T) { Config: testAccBucketObjectConfig_tags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -988,8 +984,8 @@ func TestAccS3BucketObject_tagsMultipleSlashes(t *testing.T) { Config: testAccBucketObjectConfig_updatedTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "4"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "B@BB"), resource.TestCheckResourceAttr(resourceName, "tags.Key3", "X X"), @@ -1002,8 +998,8 @@ func TestAccS3BucketObject_tagsMultipleSlashes(t *testing.T) { Config: testAccBucketObjectConfig_noTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj3), - testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "stuff"), + testAccCheckObjectVersionIDEquals(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -1012,8 +1008,8 @@ func 
TestAccS3BucketObject_tagsMultipleSlashes(t *testing.T) { Config: testAccBucketObjectConfig_tags(rName, key, "changed stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj4), - testAccCheckBucketObjectVersionIdDiffers(&obj4, &obj3), - testAccCheckBucketObjectBody(&obj4, "changed stuff"), + testAccCheckObjectVersionIDDiffers(&obj4, &obj3), + testAccCheckObjectBody(&obj4, "changed stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -1032,7 +1028,7 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithNone(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -1040,7 +1036,7 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithNone(t *testing.T) { Config: testAccBucketObjectConfig_noLockLegalHold(rName, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1050,8 +1046,8 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithNone(t *testing.T) { Config: testAccBucketObjectConfig_lockLegalHold(rName, "stuff", "ON"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", "ON"), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1062,8 +1058,8 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithNone(t *testing.T) { Config: testAccBucketObjectConfig_lockLegalHold(rName, "changed stuff", "OFF"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj3), - testAccCheckBucketObjectVersionIdDiffers(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "changed stuff"), + testAccCheckObjectVersionIDDiffers(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "changed stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", "OFF"), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1081,7 +1077,7 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithOn(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -1089,7 +1085,7 @@ func 
TestAccS3BucketObject_objectLockLegalHoldStartWithOn(t *testing.T) { Config: testAccBucketObjectConfig_lockLegalHold(rName, "stuff", "ON"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", "ON"), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1099,8 +1095,8 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithOn(t *testing.T) { Config: testAccBucketObjectConfig_lockLegalHold(rName, "stuff", "OFF"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", "OFF"), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1119,7 +1115,7 @@ func TestAccS3BucketObject_objectLockRetentionStartWithNone(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -1127,7 +1123,7 @@ func TestAccS3BucketObject_objectLockRetentionStartWithNone(t *testing.T) { Config: testAccBucketObjectConfig_noLockRetention(rName, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1137,8 +1133,8 @@ func TestAccS3BucketObject_objectLockRetentionStartWithNone(t *testing.T) { Config: testAccBucketObjectConfig_lockRetention(rName, "stuff", retainUntilDate), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", "GOVERNANCE"), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", retainUntilDate), @@ -1149,8 +1145,8 @@ func TestAccS3BucketObject_objectLockRetentionStartWithNone(t *testing.T) { Config: testAccBucketObjectConfig_noLockRetention(rName, "changed stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj3), - testAccCheckBucketObjectVersionIdDiffers(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "changed stuff"), + testAccCheckObjectVersionIDDiffers(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "changed stuff"), resource.TestCheckResourceAttr(resourceName, 
"object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1171,7 +1167,7 @@ func TestAccS3BucketObject_objectLockRetentionStartWithSet(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -1179,7 +1175,7 @@ func TestAccS3BucketObject_objectLockRetentionStartWithSet(t *testing.T) { Config: testAccBucketObjectConfig_lockRetention(rName, "stuff", retainUntilDate1), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", "GOVERNANCE"), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", retainUntilDate1), @@ -1189,8 +1185,8 @@ func TestAccS3BucketObject_objectLockRetentionStartWithSet(t *testing.T) { Config: testAccBucketObjectConfig_lockRetention(rName, "stuff", retainUntilDate2), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectVersionIDEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", "GOVERNANCE"), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", retainUntilDate2), @@ -1200,8 +1196,8 @@ func TestAccS3BucketObject_objectLockRetentionStartWithSet(t *testing.T) { Config: testAccBucketObjectConfig_lockRetention(rName, "stuff", retainUntilDate3), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj3), - testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "stuff"), + testAccCheckObjectVersionIDEquals(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", "GOVERNANCE"), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", retainUntilDate3), @@ -1211,8 +1207,8 @@ func TestAccS3BucketObject_objectLockRetentionStartWithSet(t *testing.T) { Config: testAccBucketObjectConfig_noLockRetention(rName, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj4), - testAccCheckBucketObjectVersionIdEquals(&obj4, &obj3), - testAccCheckBucketObjectBody(&obj4, "stuff"), + testAccCheckObjectVersionIDEquals(&obj4, &obj3), + testAccCheckObjectBody(&obj4, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1230,7 +1226,7 @@ func TestAccS3BucketObject_objectBucketKeyEnabled(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ 
PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -1238,7 +1234,7 @@ func TestAccS3BucketObject_objectBucketKeyEnabled(t *testing.T) { Config: testAccBucketObjectConfig_objectKeyEnabled(rName, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "stuff"), + testAccCheckObjectBody(&obj, "stuff"), resource.TestCheckResourceAttr(resourceName, "bucket_key_enabled", "true"), ), }, @@ -1254,7 +1250,7 @@ func TestAccS3BucketObject_bucketBucketKeyEnabled(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -1262,7 +1258,7 @@ func TestAccS3BucketObject_bucketBucketKeyEnabled(t *testing.T) { Config: testAccBucketObjectConfig_bucketKeyEnabled(rName, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "stuff"), + testAccCheckObjectBody(&obj, "stuff"), resource.TestCheckResourceAttr(resourceName, "bucket_key_enabled", "true"), ), }, @@ -1278,7 +1274,7 @@ func TestAccS3BucketObject_defaultBucketSSE(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -1286,7 +1282,7 @@ func TestAccS3BucketObject_defaultBucketSSE(t *testing.T) { Config: testAccBucketObjectConfig_defaultSSE(rName, "stuff"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectBody(&obj1, "stuff"), ), }, }, @@ -1302,7 +1298,7 @@ func TestAccS3BucketObject_ignoreTags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketObjectDestroy(ctx), Steps: []resource.TestStep{ @@ -1313,10 +1309,10 @@ func TestAccS3BucketObject_ignoreTags(t *testing.T) { testAccBucketObjectConfig_noTags(rName, key, "stuff")), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "stuff"), - testAccCheckBucketObjectUpdateTagsV1(ctx, resourceName, nil, map[string]string{"ignorekey1": "ignorevalue1"}), + testAccCheckObjectBody(&obj, "stuff"), + testAccCheckObjectUpdateTags(ctx, resourceName, nil, map[string]string{"ignorekey1": "ignorevalue1"}), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), - testAccCheckBucketObjectCheckTags(ctx, resourceName, map[string]string{ + testAccCheckObjectCheckTags(ctx, resourceName, map[string]string{ "ignorekey1": "ignorevalue1", }), ), @@ -1328,12 +1324,12 @@ func 
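
The ignore-tags test now goes through shared `testAccCheckObjectUpdateTags`/`testAccCheckObjectCheckTags` helpers, which the provider implements on top of its own tagging layer. Underneath, verifying tags amounts to a `GetObjectTagging` call; a sketch:

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// objectTags fetches an object's tag set as a plain map, the kind of
// lookup a tag-verification check performs (sketch; the provider's
// shared helpers go through its own tagging abstraction instead).
func objectTags(ctx context.Context, client *s3.Client, bucket, key string) (map[string]string, error) {
	out, err := client.GetObjectTagging(ctx, &s3.GetObjectTaggingInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return nil, err
	}

	tags := make(map[string]string, len(out.TagSet))
	for _, tag := range out.TagSet {
		tags[aws.ToString(tag.Key)] = aws.ToString(tag.Value)
	}

	return tags, nil
}
```
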
TestAccS3BucketObject_ignoreTags(t *testing.T) { testAccBucketObjectConfig_tags(rName, key, "stuff")), Check: resource.ComposeTestCheckFunc( testAccCheckBucketObjectExists(ctx, resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "stuff"), + testAccCheckObjectBody(&obj, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), resource.TestCheckResourceAttr(resourceName, "tags.Key3", "CCC"), - testAccCheckBucketObjectCheckTags(ctx, resourceName, map[string]string{ + testAccCheckObjectCheckTags(ctx, resourceName, map[string]string{ "ignorekey1": "ignorevalue1", "Key1": "A@AA", "Key2": "BBB", @@ -1345,50 +1341,16 @@ func TestAccS3BucketObject_ignoreTags(t *testing.T) { }) } -func testAccCheckBucketObjectVersionIdDiffers(first, second *s3.GetObjectOutput) resource.TestCheckFunc { - return func(s *terraform.State) error { - if first.VersionId == nil { - return fmt.Errorf("Expected first object to have VersionId: %s", first) - } - if second.VersionId == nil { - return fmt.Errorf("Expected second object to have VersionId: %s", second) - } - - if *first.VersionId == *second.VersionId { - return fmt.Errorf("Expected Version IDs to differ, but they are equal (%s)", *first.VersionId) - } - - return nil - } -} - -func testAccCheckBucketObjectVersionIdEquals(first, second *s3.GetObjectOutput) resource.TestCheckFunc { - return func(s *terraform.State) error { - if first.VersionId == nil { - return fmt.Errorf("Expected first object to have VersionId: %s", first) - } - if second.VersionId == nil { - return fmt.Errorf("Expected second object to have VersionId: %s", second) - } - - if *first.VersionId != *second.VersionId { - return fmt.Errorf("Expected Version IDs to be equal, but they differ (%s, %s)", *first.VersionId, *second.VersionId) - } - - return nil - } -} - func testAccCheckBucketObjectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_object" { continue } - _, err := tfs3.FindObjectByThreePartKeyV1(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], rs.Primary.Attributes["etag"]) + _, err := tfs3.FindObjectByBucketAndKey(ctx, conn, rs.Primary.Attributes["bucket"], tfs3.SDKv1CompatibleCleanKey(rs.Primary.Attributes["key"]), rs.Primary.Attributes["etag"], rs.Primary.Attributes["checksum_algorithm"]) if tfresource.NotFound(err) { continue @@ -1405,192 +1367,28 @@ func testAccCheckBucketObjectDestroy(ctx context.Context) resource.TestCheckFunc } } -func testAccCheckBucketObjectExists(ctx context.Context, n string, obj *s3.GetObjectOutput) resource.TestCheckFunc { +func testAccCheckBucketObjectExists(ctx context.Context, n string, v *s3.GetObjectOutput) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not Found: %s", n) } - if rs.Primary.ID == "" { - return fmt.Errorf("No S3 Object ID is set") - } - - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) input := &s3.GetObjectInput{ Bucket: aws.String(rs.Primary.Attributes["bucket"]), - Key: aws.String(rs.Primary.Attributes["key"]), + Key: 
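
The per-resource `testAccCheckBucketObjectVersionId*` helpers deleted above are replaced by shared `testAccCheckObjectVersionID*` equivalents. `VersionId` is still a `*string` on the v2 `GetObjectOutput`, so the comparison logic can keep the same shape; a sketch (the real helper wraps this in a `resource.TestCheckFunc`):

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// versionIDsDiffer reports whether two GetObject responses carry distinct
// version IDs; VersionId remains a *string in the v2 output types.
func versionIDsDiffer(first, second *s3.GetObjectOutput) error {
	if first.VersionId == nil || second.VersionId == nil {
		return fmt.Errorf("expected both objects to have version IDs")
	}
	if aws.ToString(first.VersionId) == aws.ToString(second.VersionId) {
		return fmt.Errorf("expected version IDs to differ, both are %s", aws.ToString(first.VersionId))
	}
	return nil
}
```
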
aws.String(tfs3.SDKv1CompatibleCleanKey(rs.Primary.Attributes["key"])), IfMatch: aws.String(rs.Primary.Attributes["etag"]), } - var out *s3.GetObjectOutput - - err := retry.RetryContext(ctx, 2*time.Minute, func() *retry.RetryError { - var err error - out, err = conn.GetObjectWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchKey) { - return retry.RetryableError( - fmt.Errorf("getting object %s, retrying: %w", rs.Primary.Attributes["bucket"], err), - ) - } - - if err != nil { - return retry.NonRetryableError(err) - } - - return nil - }) - if tfresource.TimedOut(err) { - out, err = conn.GetObjectWithContext(ctx, input) - } - - if err != nil { - return fmt.Errorf("S3 Object error: %s", err) - } - - *obj = *out - - return nil - } -} - -func testAccCheckBucketObjectBody(obj *s3.GetObjectOutput, want string) resource.TestCheckFunc { - return func(s *terraform.State) error { - body, err := io.ReadAll(obj.Body) - if err != nil { - return fmt.Errorf("failed to read body: %s", err) - } - obj.Body.Close() - - if got := string(body); got != want { - return fmt.Errorf("wrong result body %q; want %q", got, want) - } - - return nil - } -} - -func testAccCheckBucketObjectACL(ctx context.Context, n string, expectedPerms []string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - out, err := conn.GetObjectAclWithContext(ctx, &s3.GetObjectAclInput{ - Bucket: aws.String(rs.Primary.Attributes["bucket"]), - Key: aws.String(rs.Primary.Attributes["key"]), - }) - - if err != nil { - return fmt.Errorf("GetObjectAcl error: %v", err) - } - - var perms []string - for _, v := range out.Grants { - perms = append(perms, *v.Permission) - } - sort.Strings(perms) - - if !reflect.DeepEqual(perms, expectedPerms) { - return fmt.Errorf("Expected ACL permissions to be %v, got %v", expectedPerms, perms) - } - - return nil - } -} - -func testAccCheckBucketObjectStorageClass(ctx context.Context, n, expectedClass string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - out, err := tfs3.FindObjectByThreePartKeyV1(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], "") - - if err != nil { - return err - } - - // The "STANDARD" (which is also the default) storage - // class when set would not be included in the results. 
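
The deleted retry loop classified errors with `tfawserr` string codes (`s3.ErrCodeNoSuchKey`). In v2, modeled service errors are concrete types matched with `errors.As`, and the exists-check now calls `GetObject` directly. A sketch of the v2 idiom:

```go
package example

import (
	"context"
	"errors"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// getObjectIfExists shows the v2 error-handling idiom that replaces the
// deleted v1 retry loop: modeled errors are concrete types matched with
// errors.As, instead of string error codes via tfawserr (sketch).
func getObjectIfExists(ctx context.Context, client *s3.Client, bucket, key string) (*s3.GetObjectOutput, bool, error) {
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})

	var nsk *types.NoSuchKey
	if errors.As(err, &nsk) {
		return nil, false, nil
	}
	if err != nil {
		return nil, false, err
	}

	return out, true, nil
}
```
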
- storageClass := s3.StorageClassStandard - if out.StorageClass != nil { - storageClass = *out.StorageClass - } - - if storageClass != expectedClass { - return fmt.Errorf("Expected Storage Class to be %v, got %v", - expectedClass, storageClass) - } - - return nil - } -} - -func testAccCheckBucketObjectSSE(ctx context.Context, n, expectedSSE string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - out, err := tfs3.FindObjectByThreePartKeyV1(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], "") + output, err := conn.GetObject(ctx, input) if err != nil { return err } - if out.ServerSideEncryption == nil { - return fmt.Errorf("Expected a non %v Server Side Encryption.", out.ServerSideEncryption) - } - - sse := *out.ServerSideEncryption - if sse != expectedSSE { - return fmt.Errorf("Expected Server Side Encryption %v, got %v.", - expectedSSE, sse) - } - - return nil - } -} - -func testAccBucketObjectCreateTempFile(t *testing.T, data string) string { - tmpFile, err := os.CreateTemp("", "tf-acc-s3-obj") - if err != nil { - t.Fatal(err) - } - filename := tmpFile.Name() - - err = os.WriteFile(filename, []byte(data), 0644) - if err != nil { - os.Remove(filename) - t.Fatal(err) - } - - return filename -} - -func testAccCheckBucketObjectUpdateTagsV1(ctx context.Context, n string, oldTags, newTags map[string]string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - return tfs3.ObjectUpdateTagsV1(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], oldTags, newTags) - } -} - -func testAccCheckBucketObjectCheckTags(ctx context.Context, n string, expectedTags map[string]string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - got, err := tfs3.ObjectListTagsV1(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"]) - if err != nil { - return err - } - - want := tftags.New(ctx, expectedTags) - if !reflect.DeepEqual(want, got) { - return fmt.Errorf("Incorrect tags, want: %v got: %v", want, got) - } + *v = *output return nil } diff --git a/internal/service/s3/bucket_objects_data_source.go b/internal/service/s3/bucket_objects_data_source.go index ffa982ca3bf..052d6377e32 100644 --- a/internal/service/s3/bucket_objects_data_source.go +++ b/internal/service/s3/bucket_objects_data_source.go @@ -10,8 +10,9 @@ package s3 import ( "context" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -29,9 +30,10 @@ func DataSourceBucketObjects() *schema.Resource { Type: schema.TypeString, Required: true, }, - "prefix": { - Type: schema.TypeString, - Optional: true, + "common_prefixes": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, "delimiter": { Type: schema.TypeString, @@ -41,15 +43,6 @@ func DataSourceBucketObjects() *schema.Resource { Type: schema.TypeString, Optional: true, }, - "max_keys": { - Type: schema.TypeInt, - 
Optional: true, - Default: 1000, - }, - "start_after": { - Type: schema.TypeString, - Optional: true, - }, "fetch_owner": { Type: schema.TypeBool, Optional: true, @@ -59,16 +52,24 @@ Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "common_prefixes": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + "max_keys": { + Type: schema.TypeInt, + Optional: true, + Default: 1000, }, "owners": { Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + "prefix": { + Type: schema.TypeString, + Optional: true, + }, + "start_after": { + Type: schema.TypeString, + Optional: true, + }, }, DeprecationMessage: `use the aws_s3_objects data source instead`, @@ -77,25 +78,19 @@ func DataSourceBucketObjects() *schema.Resource { func dataSourceBucketObjectsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket := d.Get("bucket").(string) - prefix := d.Get("prefix").(string) - - listInput := s3.ListObjectsV2Input{ + input := &s3.ListObjectsV2Input{ Bucket: aws.String(bucket), } - if prefix != "" { - listInput.Prefix = aws.String(prefix) - } - if s, ok := d.GetOk("delimiter"); ok { - listInput.Delimiter = aws.String(s.(string)) + input.Delimiter = aws.String(s.(string)) } if s, ok := d.GetOk("encoding_type"); ok { - listInput.EncodingType = aws.String(s.(string)) + input.EncodingType = types.EncodingType(s.(string)) } // "input.MaxKeys" refers to max keys returned in a single request @@ -103,60 +98,56 @@ func dataSourceBucketObjectsRead(ctx context.Context, d *schema.ResourceData, me // through the results. "maxKeys" refers to the total number of keys returned.
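+ // (With the AWS SDK for Go v2 paginator used below, "input.MaxKeys" caps only the page size; the total requested via "max_keys" is enforced by the "nKeys" counter that breaks out of the paginator loop.)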
maxKeys := int64(d.Get("max_keys").(int)) if maxKeys <= keyRequestPageSize { - listInput.MaxKeys = aws.Int64(maxKeys) + input.MaxKeys = int32(maxKeys) + } + + if s, ok := d.GetOk("prefix"); ok { + input.Prefix = aws.String(s.(string)) } if s, ok := d.GetOk("start_after"); ok { - listInput.StartAfter = aws.String(s.(string)) + input.StartAfter = aws.String(s.(string)) } if b, ok := d.GetOk("fetch_owner"); ok { - listInput.FetchOwner = aws.Bool(b.(bool)) + input.FetchOwner = b.(bool) } - var commonPrefixes []string - var keys []string - var owners []string + var nKeys int64 + var commonPrefixes, keys, owners []string + + pages := s3.NewListObjectsV2Paginator(conn, input) +pageLoop: + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "listing S3 Bucket (%s) Objects: %s", bucket, err) + } - err := conn.ListObjectsV2PagesWithContext(ctx, &listInput, func(page *s3.ListObjectsV2Output, lastPage bool) bool { for _, commonPrefix := range page.CommonPrefixes { - commonPrefixes = append(commonPrefixes, aws.StringValue(commonPrefix.Prefix)) + commonPrefixes = append(commonPrefixes, aws.ToString(commonPrefix.Prefix)) } for _, object := range page.Contents { - keys = append(keys, aws.StringValue(object.Key)) + if nKeys >= maxKeys { + break pageLoop + } + + keys = append(keys, aws.ToString(object.Key)) if object.Owner != nil { - owners = append(owners, aws.StringValue(object.Owner.ID)) + owners = append(owners, aws.ToString(object.Owner.ID)) } - } - maxKeys = maxKeys - aws.Int64Value(page.KeyCount) - - if maxKeys <= keyRequestPageSize { - listInput.MaxKeys = aws.Int64(maxKeys) + nKeys++ } - - return !lastPage - }) - - if err != nil { - return sdkdiag.AppendErrorf(diags, "listing S3 Bucket (%s) Objects: %s", bucket, err) } d.SetId(bucket) - - if err := d.Set("common_prefixes", commonPrefixes); err != nil { - return sdkdiag.AppendErrorf(diags, "setting common_prefixes: %s", err) - } - - if err := d.Set("keys", keys); err != nil { - return sdkdiag.AppendErrorf(diags, "setting keys: %s", err) - } - - if err := d.Set("owners", owners); err != nil { - return sdkdiag.AppendErrorf(diags, "setting owners: %s", err) - } + d.Set("common_prefixes", commonPrefixes) + d.Set("keys", keys) + d.Set("owners", owners) return diags } diff --git a/internal/service/s3/bucket_objects_data_source_test.go b/internal/service/s3/bucket_objects_data_source_test.go index 3123c131b12..a7829183132 100644 --- a/internal/service/s3/bucket_objects_data_source_test.go +++ b/internal/service/s3/bucket_objects_data_source_test.go @@ -11,10 +11,10 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/service/s3" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketObjectsDataSource_basic(t *testing.T) { @@ -23,7 +23,7 @@ func TestAccS3BucketObjectsDataSource_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -49,7 +49,7 @@ func TestAccS3BucketObjectsDataSource_basicViaAccessPoint(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { 
acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -75,7 +75,7 @@ func TestAccS3BucketObjectsDataSource_all(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -106,7 +106,7 @@ func TestAccS3BucketObjectsDataSource_prefixes(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -136,7 +136,7 @@ func TestAccS3BucketObjectsDataSource_encoded(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -162,7 +162,7 @@ func TestAccS3BucketObjectsDataSource_maxKeys(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -188,7 +188,7 @@ func TestAccS3BucketObjectsDataSource_startAfter(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ @@ -213,7 +213,7 @@ func TestAccS3BucketObjectsDataSource_fetchOwner(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ diff --git a/internal/service/s3/bucket_ownership_controls.go b/internal/service/s3/bucket_ownership_controls.go index cde57cfd5b4..43b83a658bd 100644 --- a/internal/service/s3/bucket_ownership_controls.go +++ b/internal/service/s3/bucket_ownership_controls.go @@ -8,13 +8,16 @@ import ( "log" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" 
"github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -46,9 +49,9 @@ func ResourceBucketOwnershipControls() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "object_ownership": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(s3.ObjectOwnership_Values(), false), + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.ObjectOwnership](), }, }, }, @@ -59,18 +62,17 @@ func ResourceBucketOwnershipControls() *schema.Resource { func resourceBucketOwnershipControlsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket := d.Get("bucket").(string) - input := &s3.PutBucketOwnershipControlsInput{ Bucket: aws.String(bucket), - OwnershipControls: &s3.OwnershipControls{ + OwnershipControls: &types.OwnershipControls{ Rules: expandOwnershipControlsRules(d.Get("rule").([]interface{})), }, } - _, err := conn.PutBucketOwnershipControlsWithContext(ctx, input) + _, err := conn.PutBucketOwnershipControls(ctx, input) if err != nil { return sdkdiag.AppendErrorf(diags, "creating S3 Bucket (%s) Ownership Controls: %s", bucket, err) @@ -78,47 +80,36 @@ func resourceBucketOwnershipControlsCreate(ctx context.Context, d *schema.Resour d.SetId(bucket) + _, err = tfresource.RetryWhenNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findOwnershipControls(ctx, conn, d.Id()) + }) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for S3 Bucket Ownership Controls (%s) create: %s", d.Id(), err) + } + return append(diags, resourceBucketOwnershipControlsRead(ctx, d, meta)...) 
} func resourceBucketOwnershipControlsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) - - input := &s3.GetBucketOwnershipControlsInput{ - Bucket: aws.String(d.Id()), - } + conn := meta.(*conns.AWSClient).S3Client(ctx) - output, err := conn.GetBucketOwnershipControlsWithContext(ctx, input) + oc, err := findOwnershipControls(ctx, conn, d.Id()) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - log.Printf("[WARN] S3 Bucket Ownership Controls (%s) not found, removing from state", d.Id()) - d.SetId("") - return diags - } - - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, "OwnershipControlsNotFoundError") { + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] S3 Bucket Ownership Controls (%s) not found, removing from state", d.Id()) d.SetId("") return diags } if err != nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket (%s) Ownership Controls: %s", d.Id(), err) - } - - if output == nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket (%s) Ownership Controls: empty response", d.Id()) + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Ownership Controls (%s): %s", d.Id(), err) } d.Set("bucket", d.Id()) - - if output.OwnershipControls == nil { - d.Set("rule", nil) - } else { - if err := d.Set("rule", flattenOwnershipControlsRules(output.OwnershipControls.Rules)); err != nil { - return sdkdiag.AppendErrorf(diags, "setting rule: %s", err) - } + if err := d.Set("rule", flattenOwnershipControlsRules(oc.Rules)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting rule: %s", err) } return diags @@ -126,19 +117,19 @@ func resourceBucketOwnershipControlsRead(ctx context.Context, d *schema.Resource func resourceBucketOwnershipControlsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) input := &s3.PutBucketOwnershipControlsInput{ Bucket: aws.String(d.Id()), - OwnershipControls: &s3.OwnershipControls{ + OwnershipControls: &types.OwnershipControls{ Rules: expandOwnershipControlsRules(d.Get("rule").([]interface{})), }, } - _, err := conn.PutBucketOwnershipControlsWithContext(ctx, input) + _, err := conn.PutBucketOwnershipControls(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "updating S3 Bucket (%s) Ownership Controls: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating S3 Bucket Ownership Controls (%s): %s", d.Id(), err) } return append(diags, resourceBucketOwnershipControlsRead(ctx, d, meta)...) 
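A note on the new wait logic: the create path above now blocks until `findOwnershipControls` succeeds, and the delete path in the next hunk blocks until it reports NotFound, absorbing S3's eventual consistency in both directions. Below is a minimal, self-contained sketch of that poll-until-state pattern; the names are illustrative and this is not the `tfresource` implementation.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("not found") // stand-in for retry.NotFoundError

// waitUntil polls check until the resource's presence matches wantFound or
// the timeout elapses, mirroring how tfresource.RetryWhenNotFound (after
// create) and tfresource.RetryUntilNotFound (after delete) are used above.
func waitUntil(ctx context.Context, timeout, interval time.Duration, wantFound bool, check func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	for {
		err := check(ctx)
		if err != nil && !errors.Is(err, errNotFound) {
			return err // a real failure, not a propagation delay
		}
		found := err == nil
		if found == wantFound {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	polls := 0
	err := waitUntil(context.Background(), 5*time.Second, 100*time.Millisecond, true,
		func(context.Context) error {
			if polls++; polls < 3 {
				return errNotFound // simulate read-after-write lag
			}
			return nil
		})
	fmt.Println(err) // <nil> once the third poll finds the resource
}
```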
@@ -146,40 +137,40 @@ func resourceBucketOwnershipControlsUpdate(ctx context.Context, d *schema.Resour func resourceBucketOwnershipControlsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) - input := &s3.DeleteBucketOwnershipControlsInput{ - Bucket: aws.String(d.Id()), - } + log.Printf("[DEBUG] Deleting S3 Bucket Ownership Controls: %s", d.Id()) + _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 5*time.Minute, func() (interface{}, error) { + return conn.DeleteBucketOwnershipControls(ctx, &s3.DeleteBucketOwnershipControlsInput{ + Bucket: aws.String(d.Id()), + }) + }, errCodeOperationAborted) - _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 5*time.Minute, - func() (any, error) { - return conn.DeleteBucketOwnershipControlsWithContext(ctx, input) - }, - "OperationAborted", - ) - - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket, errCodeOwnershipControlsNotFoundError) { return diags } - if tfawserr.ErrCodeEquals(err, "OwnershipControlsNotFoundError") { - return diags + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket Ownership Controls (%s): %s", d.Id(), err) } + _, err = tfresource.RetryUntilNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findOwnershipControls(ctx, conn, d.Id()) + }) + if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket (%s) Ownership Controls: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "waiting for S3 Bucket Ownership Controls (%s) delete: %s", d.Id(), err) } return diags } -func expandOwnershipControlsRules(tfList []interface{}) []*s3.OwnershipControlsRule { +func expandOwnershipControlsRules(tfList []interface{}) []types.OwnershipControlsRule { if len(tfList) == 0 || tfList[0] == nil { return nil } - var apiObjects []*s3.OwnershipControlsRule + var apiObjects []types.OwnershipControlsRule for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]interface{}) @@ -194,21 +185,17 @@ func expandOwnershipControlsRules(tfList []interface{}) []*s3.OwnershipControlsR return apiObjects } -func expandOwnershipControlsRule(tfMap map[string]interface{}) *s3.OwnershipControlsRule { - if tfMap == nil { - return nil - } - - apiObject := &s3.OwnershipControlsRule{} +func expandOwnershipControlsRule(tfMap map[string]interface{}) types.OwnershipControlsRule { + apiObject := types.OwnershipControlsRule{} if v, ok := tfMap["object_ownership"].(string); ok && v != "" { - apiObject.ObjectOwnership = aws.String(v) + apiObject.ObjectOwnership = types.ObjectOwnership(v) } return apiObject } -func flattenOwnershipControlsRules(apiObjects []*s3.OwnershipControlsRule) []interface{} { +func flattenOwnershipControlsRules(apiObjects []types.OwnershipControlsRule) []interface{} { if len(apiObjects) == 0 { return nil } @@ -216,26 +203,41 @@ func flattenOwnershipControlsRules(apiObjects []*s3.OwnershipControlsRule) []int var tfList []interface{} for _, apiObject := range apiObjects { - if apiObject == nil { - continue - } - tfList = append(tfList, flattenOwnershipControlsRule(apiObject)) } return tfList } -func flattenOwnershipControlsRule(apiObject *s3.OwnershipControlsRule) map[string]interface{} { - if apiObject == nil { - return nil +func flattenOwnershipControlsRule(apiObject types.OwnershipControlsRule) map[string]interface{} { + tfMap := map[string]interface{}{ + 
"object_ownership": apiObject.ObjectOwnership, } - tfMap := map[string]interface{}{} + return tfMap +} - if v := apiObject.ObjectOwnership; v != nil { - tfMap["object_ownership"] = aws.StringValue(v) +func findOwnershipControls(ctx context.Context, conn *s3.Client, bucket string) (*types.OwnershipControls, error) { + input := &s3.GetBucketOwnershipControlsInput{ + Bucket: aws.String(bucket), } - return tfMap + output, err := conn.GetBucketOwnershipControls(ctx, input) + + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket, errCodeOwnershipControlsNotFoundError) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.OwnershipControls == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.OwnershipControls, nil } diff --git a/internal/service/s3/bucket_ownership_controls_test.go b/internal/service/s3/bucket_ownership_controls_test.go index 258b2255933..e56e978136e 100644 --- a/internal/service/s3/bucket_ownership_controls_test.go +++ b/internal/service/s3/bucket_ownership_controls_test.go @@ -8,15 +8,15 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/service/s3/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketOwnershipControls_basic(t *testing.T) { @@ -26,17 +26,17 @@ func TestAccS3BucketOwnershipControls_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketOwnershipControlsDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, s3.ObjectOwnershipBucketOwnerPreferred), + Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, string(types.ObjectOwnershipBucketOwnerPreferred)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketOwnershipControlsExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "bucket", rName), resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), - resource.TestCheckResourceAttr(resourceName, "rule.0.object_ownership", s3.ObjectOwnershipBucketOwnerPreferred), + resource.TestCheckResourceAttr(resourceName, "rule.0.object_ownership", string(types.ObjectOwnershipBucketOwnerPreferred)), ), }, { @@ -55,12 +55,12 @@ func TestAccS3BucketOwnershipControls_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketOwnershipControlsDestroy(ctx), Steps: []resource.TestStep{ { - Config: 
testAccBucketOwnershipControlsConfig_ruleObject(rName, s3.ObjectOwnershipBucketOwnerPreferred), + Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, string(types.ObjectOwnershipBucketOwnerPreferred)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketOwnershipControlsExists(ctx, resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfs3.ResourceBucketOwnershipControls(), resourceName), @@ -79,12 +79,12 @@ func TestAccS3BucketOwnershipControls_Disappears_bucket(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketOwnershipControlsDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, s3.ObjectOwnershipBucketOwnerPreferred), + Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, string(types.ObjectOwnershipBucketOwnerPreferred)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketOwnershipControlsExists(ctx, resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfs3.ResourceBucket(), s3BucketResourceName), @@ -102,17 +102,17 @@ func TestAccS3BucketOwnershipControls_Rule_objectOwnership(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketOwnershipControlsDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, s3.ObjectOwnershipObjectWriter), + Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, string(types.ObjectOwnershipObjectWriter)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketOwnershipControlsExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "bucket", rName), resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), - resource.TestCheckResourceAttr(resourceName, "rule.0.object_ownership", s3.ObjectOwnershipObjectWriter), + resource.TestCheckResourceAttr(resourceName, "rule.0.object_ownership", string(types.ObjectOwnershipObjectWriter)), ), }, { @@ -121,12 +121,12 @@ func TestAccS3BucketOwnershipControls_Rule_objectOwnership(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, s3.ObjectOwnershipBucketOwnerPreferred), + Config: testAccBucketOwnershipControlsConfig_ruleObject(rName, string(types.ObjectOwnershipBucketOwnerPreferred)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketOwnershipControlsExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "bucket", rName), resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), - resource.TestCheckResourceAttr(resourceName, "rule.0.object_ownership", s3.ObjectOwnershipBucketOwnerPreferred), + resource.TestCheckResourceAttr(resourceName, "rule.0.object_ownership", string(types.ObjectOwnershipBucketOwnerPreferred)), ), }, }, @@ -135,24 +135,16 @@ func TestAccS3BucketOwnershipControls_Rule_objectOwnership(t *testing.T) { func testAccCheckBucketOwnershipControlsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := 
acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_ownership_controls" { continue } - input := &s3.GetBucketOwnershipControlsInput{ - Bucket: aws.String(rs.Primary.ID), - } - - _, err := conn.GetBucketOwnershipControlsWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - continue - } + _, err := tfs3.FindOwnershipControls(ctx, conn, rs.Primary.ID) - if tfawserr.ErrCodeEquals(err, "OwnershipControlsNotFoundError") { + if tfresource.NotFound(err) { continue } @@ -160,31 +152,23 @@ func testAccCheckBucketOwnershipControlsDestroy(ctx context.Context) resource.Te return err } - return fmt.Errorf("S3 Bucket Ownership Controls (%s) still exists", rs.Primary.ID) + return fmt.Errorf("S3 Bucket Ownership Controls %s still exists", rs.Primary.ID) } return nil } } -func testAccCheckBucketOwnershipControlsExists(ctx context.Context, resourceName string) resource.TestCheckFunc { +func testAccCheckBucketOwnershipControlsExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[resourceName] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("not found: %s", resourceName) + return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { - return fmt.Errorf("no resource ID is set") - } - - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - - input := &s3.GetBucketOwnershipControlsInput{ - Bucket: aws.String(rs.Primary.ID), - } + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) - _, err := conn.GetBucketOwnershipControlsWithContext(ctx, input) + _, err := tfs3.FindOwnershipControls(ctx, conn, rs.Primary.ID) return err } diff --git a/internal/service/s3/bucket_request_payment_configuration.go b/internal/service/s3/bucket_request_payment_configuration.go index f978ff54aef..c41120129b9 100644 --- a/internal/service/s3/bucket_request_payment_configuration.go +++ b/internal/service/s3/bucket_request_payment_configuration.go @@ -6,15 +6,17 @@ package s3 import ( "context" "log" - "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/s3" + "github.com/aws/aws-sdk-go-v2/service/s3/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) @@ -26,6 +28,7 @@ func ResourceBucketRequestPaymentConfiguration() *schema.Resource { ReadWithoutTimeout: resourceBucketRequestPaymentConfigurationRead, UpdateWithoutTimeout: resourceBucketRequestPaymentConfigurationUpdate, DeleteWithoutTimeout: resourceBucketRequestPaymentConfigurationDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, @@ -44,70 +47,68 @@ func ResourceBucketRequestPaymentConfiguration() *schema.Resource { ValidateFunc: verify.ValidAccountID, }, "payer": { - Type: schema.TypeString, - Required: true, - ValidateFunc: 
validation.StringInSlice(s3.Payer_Values(), false), + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.Payer](), }, }, } } func resourceBucketRequestPaymentConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) - input := &s3.PutBucketRequestPaymentInput{ Bucket: aws.String(bucket), - RequestPaymentConfiguration: &s3.RequestPaymentConfiguration{ - Payer: aws.String(d.Get("payer").(string)), + RequestPaymentConfiguration: &types.RequestPaymentConfiguration{ + Payer: types.Payer(d.Get("payer").(string)), }, } - if expectedBucketOwner != "" { input.ExpectedBucketOwner = aws.String(expectedBucketOwner) } - _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 2*time.Minute, func() (interface{}, error) { - return conn.PutBucketRequestPaymentWithContext(ctx, input) - }, s3.ErrCodeNoSuchBucket) + _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return conn.PutBucketRequestPayment(ctx, input) + }, errCodeNoSuchBucket) if err != nil { - return diag.Errorf("creating S3 bucket (%s) request payment configuration: %s", bucket, err) + return diag.Errorf("creating S3 Bucket (%s) Request Payment Configuration: %s", bucket, err) } d.SetId(CreateResourceID(bucket, expectedBucketOwner)) + _, err = tfresource.RetryWhenNotFound(ctx, s3BucketPropagationTimeout, func() (interface{}, error) { + return findBucketRequestPayment(ctx, conn, bucket, expectedBucketOwner) + }) + + if err != nil { + return diag.Errorf("waiting for S3 Bucket Request Payment Configuration (%s) create: %s", d.Id(), err) + } + return resourceBucketRequestPaymentConfigurationRead(ctx, d, meta) } func resourceBucketRequestPaymentConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { return diag.FromErr(err) } - input := &s3.GetBucketRequestPaymentInput{ - Bucket: aws.String(bucket), - } - - if expectedBucketOwner != "" { - input.ExpectedBucketOwner = aws.String(expectedBucketOwner) - } - - output, err := conn.GetBucketRequestPaymentWithContext(ctx, input) + output, err := findBucketRequestPayment(ctx, conn, bucket, expectedBucketOwner) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] S3 Bucket Request Payment Configuration (%s) not found, removing from state", d.Id()) d.SetId("") return nil } - if output == nil { - return diag.Errorf("reading S3 bucket request payment configuration (%s): empty output", d.Id()) + if err != nil { + return diag.Errorf("reading S3 Bucket Request Payment Configuration (%s): %s", d.Id(), err) } d.Set("bucket", bucket) @@ -118,7 +119,7 @@ func resourceBucketRequestPaymentConfigurationRead(ctx context.Context, d *schem } func resourceBucketRequestPaymentConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -127,26 +128,25 @@ func 
resourceBucketRequestPaymentConfigurationUpdate(ctx context.Context, d *sch input := &s3.PutBucketRequestPaymentInput{ Bucket: aws.String(bucket), - RequestPaymentConfiguration: &s3.RequestPaymentConfiguration{ - Payer: aws.String(d.Get("payer").(string)), + RequestPaymentConfiguration: &types.RequestPaymentConfiguration{ + Payer: types.Payer(d.Get("payer").(string)), }, } - if expectedBucketOwner != "" { input.ExpectedBucketOwner = aws.String(expectedBucketOwner) } - _, err = conn.PutBucketRequestPaymentWithContext(ctx, input) + _, err = conn.PutBucketRequestPayment(ctx, input) if err != nil { - return diag.Errorf("updating S3 bucket request payment configuration (%s): %s", d.Id(), err) + return diag.Errorf("updating S3 Bucket Request Payment Configuration (%s): %s", d.Id(), err) } return resourceBucketRequestPaymentConfigurationRead(ctx, d, meta) } func resourceBucketRequestPaymentConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn(ctx) + conn := meta.(*conns.AWSClient).S3Client(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -155,26 +155,55 @@ func resourceBucketRequestPaymentConfigurationDelete(ctx context.Context, d *sch input := &s3.PutBucketRequestPaymentInput{ Bucket: aws.String(bucket), - RequestPaymentConfiguration: &s3.RequestPaymentConfiguration{ + RequestPaymentConfiguration: &types.RequestPaymentConfiguration{ // To remove a configuration, it is equivalent to disabling // "Requester Pays" in the console; thus, we reset "Payer" back to "BucketOwner" - Payer: aws.String(s3.PayerBucketOwner), + Payer: types.PayerBucketOwner, }, } - if expectedBucketOwner != "" { input.ExpectedBucketOwner = aws.String(expectedBucketOwner) } - _, err = conn.PutBucketRequestPaymentWithContext(ctx, input) + _, err = conn.PutBucketRequestPayment(ctx, input) - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket) { return nil } if err != nil { - return diag.Errorf("deleting S3 bucket request payment configuration (%s): %s", d.Id(), err) + return diag.Errorf("deleting S3 Bucket Request Payment Configuration (%s): %s", d.Id(), err) } + // Don't wait for the request payment configuration to disappear as it still exists after reset. 
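+ // (GetBucketRequestPayment on an existing bucket always reports a payer, which is "BucketOwner" after the reset above, so polling for NotFound here would never succeed.)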
+ return nil } + +func findBucketRequestPayment(ctx context.Context, conn *s3.Client, bucket, expectedBucketOwner string) (*s3.GetBucketRequestPaymentOutput, error) { + input := &s3.GetBucketRequestPaymentInput{ + Bucket: aws.String(bucket), + } + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + output, err := conn.GetBucketRequestPayment(ctx, input) + + if tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} diff --git a/internal/service/s3/bucket_request_payment_configuration_test.go b/internal/service/s3/bucket_request_payment_configuration_test.go index 4c9e442edf5..5a337de15e2 100644 --- a/internal/service/s3/bucket_request_payment_configuration_test.go +++ b/internal/service/s3/bucket_request_payment_configuration_test.go @@ -8,15 +8,15 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/service/s3/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccS3BucketRequestPaymentConfiguration_Basic_BucketOwner(t *testing.T) { @@ -26,16 +26,16 @@ func TestAccS3BucketRequestPaymentConfiguration_Basic_BucketOwner(t *testing.T) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketRequestPaymentConfigurationDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, s3.PayerBucketOwner), + Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, string(types.PayerBucketOwner)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketRequestPaymentConfigurationExists(ctx, resourceName), resource.TestCheckResourceAttrPair(resourceName, "bucket", "aws_s3_bucket.test", "id"), - resource.TestCheckResourceAttr(resourceName, "payer", s3.PayerBucketOwner), + resource.TestCheckResourceAttr(resourceName, "payer", string(types.PayerBucketOwner)), ), }, { @@ -54,16 +54,16 @@ func TestAccS3BucketRequestPaymentConfiguration_Basic_Requester(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketRequestPaymentConfigurationDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, s3.PayerRequester), + Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, string(types.PayerRequester)), Check: 
resource.ComposeTestCheckFunc( testAccCheckBucketRequestPaymentConfigurationExists(ctx, resourceName), resource.TestCheckResourceAttrPair(resourceName, "bucket", "aws_s3_bucket.test", "id"), - resource.TestCheckResourceAttr(resourceName, "payer", s3.PayerRequester), + resource.TestCheckResourceAttr(resourceName, "payer", string(types.PayerRequester)), ), }, { @@ -82,20 +82,22 @@ func TestAccS3BucketRequestPaymentConfiguration_update(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketRequestPaymentConfigurationDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, s3.PayerRequester), + Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, string(types.PayerRequester)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketRequestPaymentConfigurationExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "payer", string(types.PayerRequester)), ), }, { - Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, s3.PayerBucketOwner), + Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, string(types.PayerBucketOwner)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketRequestPaymentConfigurationExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "payer", string(types.PayerBucketOwner)), ), }, { @@ -104,9 +106,10 @@ func TestAccS3BucketRequestPaymentConfiguration_update(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, s3.PayerRequester), + Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, string(types.PayerRequester)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketRequestPaymentConfigurationExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "payer", string(types.PayerRequester)), ), }, }, @@ -121,22 +124,22 @@ func TestAccS3BucketRequestPaymentConfiguration_migrate_noChange(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketRequestPaymentConfigurationDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketConfig_requestPayer(rName, s3.PayerRequester), + Config: testAccBucketConfig_requestPayer(rName, string(types.PayerRequester)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketExists(ctx, bucketResourceName), - resource.TestCheckResourceAttr(bucketResourceName, "request_payer", s3.PayerRequester), + resource.TestCheckResourceAttr(bucketResourceName, "request_payer", string(types.PayerRequester)), ), }, { - Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, s3.PayerRequester), + Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, string(types.PayerRequester)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketRequestPaymentConfigurationExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "payer", s3.PayerRequester), + resource.TestCheckResourceAttr(resourceName, "payer", string(types.PayerRequester)), ), }, }, @@ -151,22 +154,22 @@ func 
TestAccS3BucketRequestPaymentConfiguration_migrate_withChange(t *testing.T) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketRequestPaymentConfigurationDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccBucketConfig_requestPayer(rName, s3.PayerRequester), + Config: testAccBucketConfig_requestPayer(rName, string(types.PayerRequester)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketExists(ctx, bucketResourceName), - resource.TestCheckResourceAttr(bucketResourceName, "request_payer", s3.PayerRequester), + resource.TestCheckResourceAttr(bucketResourceName, "request_payer", string(types.PayerRequester)), ), }, { - Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, s3.PayerBucketOwner), + Config: testAccBucketRequestPaymentConfigurationConfig_basic(rName, string(types.PayerBucketOwner)), Check: resource.ComposeTestCheckFunc( testAccCheckBucketRequestPaymentConfigurationExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "payer", s3.PayerBucketOwner), + resource.TestCheckResourceAttr(resourceName, "payer", string(types.PayerBucketOwner)), ), }, }, @@ -175,7 +178,7 @@ func TestAccS3BucketRequestPaymentConfiguration_migrate_withChange(t *testing.T) func testAccCheckBucketRequestPaymentConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_request_payment_configuration" { @@ -187,70 +190,40 @@ func testAccCheckBucketRequestPaymentConfigurationDestroy(ctx context.Context) r return err } - input := &s3.GetBucketRequestPaymentInput{ - Bucket: aws.String(bucket), - } - - if expectedBucketOwner != "" { - input.ExpectedBucketOwner = aws.String(expectedBucketOwner) - } + _, err = tfs3.FindBucketRequestPayment(ctx, conn, bucket, expectedBucketOwner) - output, err := conn.GetBucketRequestPaymentWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + if tfresource.NotFound(err) { continue } if err != nil { - return fmt.Errorf("error getting S3 bucket request payment configuration (%s): %w", rs.Primary.ID, err) + return err } - if output != nil && aws.StringValue(output.Payer) != s3.PayerBucketOwner { - return fmt.Errorf("S3 bucket request payment configuration (%s) still exists", rs.Primary.ID) - } + return fmt.Errorf("S3 Bucket Request Payment Configuration %s still exists", rs.Primary.ID) } return nil } } -func testAccCheckBucketRequestPaymentConfigurationExists(ctx context.Context, resourceName string) resource.TestCheckFunc { +func testAccCheckBucketRequestPaymentConfigurationExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[resourceName] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", resourceName) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("Resource (%s) ID not set", resourceName) + return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) if err != nil { return err } 
- input := &s3.GetBucketRequestPaymentInput{ - Bucket: aws.String(bucket), - } - - if expectedBucketOwner != "" { - input.ExpectedBucketOwner = aws.String(expectedBucketOwner) - } + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Client(ctx) - output, err := conn.GetBucketRequestPaymentWithContext(ctx, input) + _, err = tfs3.FindBucketRequestPayment(ctx, conn, bucket, expectedBucketOwner) - if err != nil { - return fmt.Errorf("error getting S3 bucket request payment configuration (%s): %w", rs.Primary.ID, err) - } - - if output == nil { - return fmt.Errorf("S3 Bucket request payment configuration (%s) not found", rs.Primary.ID) - } - - return nil + return err } } diff --git a/internal/service/s3/bucket_test.go b/internal/service/s3/bucket_test.go index f3998913bd7..309469dcd85 100644 --- a/internal/service/s3/bucket_test.go +++ b/internal/service/s3/bucket_test.go @@ -2638,6 +2638,25 @@ func testAccCheckBucketAddObjectsWithLegalHold(ctx context.Context, n string, ke } } +func testAccCheckBucketAddObjectWithMetadata(ctx context.Context, n string, key string, metadata map[string]string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs := s.RootModule().Resources[n] + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) + + _, err := conn.PutObjectWithContext(ctx, &s3.PutObjectInput{ + Bucket: aws.String(rs.Primary.ID), + Key: aws.String(key), + Metadata: aws.StringMap(metadata), + }) + + if err != nil { + return fmt.Errorf("PutObject error: %s", err) + } + + return nil + } +} + // Create an S3 bucket via a CF stack so that it has system tags. func testAccCheckBucketCreateViaCloudFormation(ctx context.Context, n string, stackID *string) resource.TestCheckFunc { return func(s *terraform.State) error { diff --git a/internal/service/s3/enum.go b/internal/service/s3/enum.go index 5c7fa174b2c..0abc9157708 100644 --- a/internal/service/s3/enum.go +++ b/internal/service/s3/enum.go @@ -3,33 +3,9 @@ package s3 -import ( - "github.com/aws/aws-sdk-go/service/s3" -) - const DefaultKMSKeyAlias = "alias/aws/s3" -// These should be defined in the AWS SDK for Go. 
There is an open issue https://github.com/aws/aws-sdk-go/issues/2683 const ( - BucketCannedACLExecRead = "aws-exec-read" - BucketCannedACLLogDeliveryWrite = "log-delivery-write" - LifecycleRuleStatusEnabled = "Enabled" LifecycleRuleStatusDisabled = "Disabled" ) - -func BucketCannedACL_Values() []string { - result := s3.BucketCannedACL_Values() - result = appendUniqueString(result, BucketCannedACLExecRead) - result = appendUniqueString(result, BucketCannedACLLogDeliveryWrite) - return result -} - -func appendUniqueString(slice []string, elem string) []string { - for _, e := range slice { - if e == elem { - return slice - } - } - return append(slice, elem) -} diff --git a/internal/service/s3/errors.go b/internal/service/s3/errors.go index a497465e9fd..758f1839cbd 100644 --- a/internal/service/s3/errors.go +++ b/internal/service/s3/errors.go @@ -29,6 +29,7 @@ const ( errCodeObjectLockConfigurationNotFound = "ObjectLockConfigurationNotFound" errCodeObjectLockConfigurationNotFoundError = "ObjectLockConfigurationNotFoundError" errCodeOperationAborted = "OperationAborted" + errCodeOwnershipControlsNotFoundError = "OwnershipControlsNotFoundError" ErrCodeReplicationConfigurationNotFound = "ReplicationConfigurationNotFoundError" errCodeServerSideEncryptionConfigurationNotFound = "ServerSideEncryptionConfigurationNotFoundError" errCodeUnsupportedArgument = "UnsupportedArgument" diff --git a/internal/service/s3/exports_test.go b/internal/service/s3/exports_test.go index c4f6d2c74d8..77bc655eb55 100644 --- a/internal/service/s3/exports_test.go +++ b/internal/service/s3/exports_test.go @@ -11,12 +11,19 @@ var ( FindBucket = findBucket FindBucketACL = findBucketACL FindBucketAccelerateConfiguration = findBucketAccelerateConfiguration + FindBucketNotificationConfiguration = findBucketNotificationConfiguration FindBucketPolicy = findBucketPolicy + FindBucketRequestPayment = findBucketRequestPayment FindBucketVersioning = findBucketVersioning FindBucketWebsite = findBucketWebsite FindCORSRules = findCORSRules + FindIntelligentTieringConfiguration = findIntelligentTieringConfiguration + FindInventoryConfiguration = findInventoryConfiguration + FindLoggingEnabled = findLoggingEnabled + FindMetricsConfiguration = findMetricsConfiguration FindObjectByBucketAndKey = findObjectByBucketAndKey FindObjectLockConfiguration = findObjectLockConfiguration + FindOwnershipControls = findOwnershipControls FindServerSideEncryptionConfiguration = findServerSideEncryptionConfiguration SDKv1CompatibleCleanKey = sdkv1CompatibleCleanKey diff --git a/internal/service/s3/object_data_source_test.go b/internal/service/s3/object_data_source_test.go index 53c5c87958b..e690926fa45 100644 --- a/internal/service/s3/object_data_source_test.go +++ b/internal/service/s3/object_data_source_test.go @@ -42,9 +42,11 @@ func TestAccS3ObjectDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), resource.TestCheckResourceAttrPair(dataSourceName, "etag", resourceName, "etag"), resource.TestMatchResourceAttr(dataSourceName, "last_modified", regexache.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttr(dataSourceName, "metadata.%", "0"), resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_legal_hold_status", resourceName, "object_lock_legal_hold_status"), resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_mode", resourceName, "object_lock_mode"), resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_retain_until_date", 
resourceName, "object_lock_retain_until_date"), + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "0"), ), }, }, @@ -401,6 +403,65 @@ func TestAccS3ObjectDataSource_checksumMode(t *testing.T) { }) } +func TestAccS3ObjectDataSource_metadata(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_s3_object.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccObjectDataSourceConfig_metadata(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "metadata.%", "2"), + resource.TestCheckResourceAttr(dataSourceName, "metadata.key1", "value1"), + resource.TestCheckResourceAttr(dataSourceName, "metadata.key2", "Value2"), + ), + }, + }, + }) +} + +func TestAccS3ObjectDataSource_metadataUppercaseKey(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + key := fmt.Sprintf("%[1]s-key", rName) + bucketResourceName := "aws_s3_bucket.test" + dataSourceName := "data.aws_s3_object.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.S3EndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccObjectDataSourceConfig_metadataBucketOnly(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketAddObjectWithMetadata(ctx, bucketResourceName, key, map[string]string{ + "key1": "value1", + "Key2": "Value2", + }), + ), + }, + { + Config: testAccObjectDataSourceConfig_metadataBucketAndDS(rName, key), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "metadata.%", "2"), + resource.TestCheckResourceAttr(dataSourceName, "metadata.key1", "value1"), + // https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3#HeadObjectOutput + // Map keys will be normalized to lower-case. 
+ resource.TestCheckResourceAttr(dataSourceName, "metadata.key2", "Value2"), + ), + }, + }, + }) +} + func testAccObjectDataSourceConfig_basic(rName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { @@ -734,3 +795,50 @@ data "aws_s3_object" "test" { } `, rName) } + +func testAccObjectDataSourceConfig_metadata(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_object" "test" { + bucket = aws_s3_bucket.test.bucket + key = "%[1]s-key" + content = "Hello World" + + metadata = { + key1 = "value1" + key2 = "Value2" + } +} + +data "aws_s3_object" "test" { + bucket = aws_s3_bucket.test.bucket + key = aws_s3_object.test.key +} +`, rName) +} + +func testAccObjectDataSourceConfig_metadataBucketOnly(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + force_destroy = true +} +`, rName) +} + +func testAccObjectDataSourceConfig_metadataBucketAndDS(rName, key string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + force_destroy = true +} + +data "aws_s3_object" "test" { + bucket = aws_s3_bucket.test.bucket + key = %[2]q +} +`, rName, key) +} diff --git a/internal/service/s3/tags.go b/internal/service/s3/tags.go index e6e544cb210..690521b27fd 100644 --- a/internal/service/s3/tags.go +++ b/internal/service/s3/tags.go @@ -9,7 +9,6 @@ package s3 import ( "context" "fmt" - "time" aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" s3_sdkv2 "github.com/aws/aws-sdk-go-v2/service/s3" @@ -20,7 +19,6 @@ import ( tfawserr_sdkv1 "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" tfawserr_sdkv2 "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) // Custom S3 tag service update functions using the same format as generated code. @@ -154,69 +152,3 @@ func ObjectUpdateTags(ctx context.Context, conn *s3_sdkv2.Client, bucket, key st return nil } - -// ObjectListTagsV1 lists S3 object tags (AWS SDK for Go v1). -func ObjectListTagsV1(ctx context.Context, conn s3iface_sdkv1.S3API, bucket, key string) (tftags.KeyValueTags, error) { - input := &s3_sdkv1.GetObjectTaggingInput{ - Bucket: aws_sdkv1.String(bucket), - Key: aws_sdkv1.String(key), - } - - outputRaw, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 1*time.Minute, func() (interface{}, error) { - return conn.GetObjectTaggingWithContext(ctx, input) - }, s3_sdkv1.ErrCodeNoSuchKey) - - if tfawserr_sdkv1.ErrCodeEquals(err, errCodeNoSuchTagSet, errCodeNoSuchTagSetError) { - return tftags.New(ctx, nil), nil - } - - if err != nil { - return tftags.New(ctx, nil), err - } - - return KeyValueTags(ctx, outputRaw.(*s3_sdkv1.GetObjectTaggingOutput).TagSet), nil -} - -// ObjectUpdateTagsV1 updates S3 object tags (AWS SDK for Go v1). -func ObjectUpdateTagsV1(ctx context.Context, conn s3iface_sdkv1.S3API, bucket, key string, oldTagsMap, newTagsMap any) error { - oldTags := tftags.New(ctx, oldTagsMap) - newTags := tftags.New(ctx, newTagsMap) - - // We need to also consider any existing ignored tags. 
- allTags, err := ObjectListTagsV1(ctx, conn, bucket, key) - - if err != nil { - return fmt.Errorf("listing resource tags (%s/%s): %w", bucket, key, err) - } - - ignoredTags := allTags.Ignore(oldTags).Ignore(newTags) - - if len(newTags)+len(ignoredTags) > 0 { - input := &s3_sdkv1.PutObjectTaggingInput{ - Bucket: aws_sdkv1.String(bucket), - Key: aws_sdkv1.String(key), - Tagging: &s3_sdkv1.Tagging{ - TagSet: Tags(newTags.Merge(ignoredTags)), - }, - } - - _, err := conn.PutObjectTaggingWithContext(ctx, input) - - if err != nil { - return fmt.Errorf("setting resource tags (%s/%s): %w", bucket, key, err) - } - } else if len(oldTags) > 0 && len(ignoredTags) == 0 { - input := &s3_sdkv1.DeleteObjectTaggingInput{ - Bucket: aws_sdkv1.String(bucket), - Key: aws_sdkv1.String(key), - } - - _, err := conn.DeleteObjectTaggingWithContext(ctx, input) - - if err != nil { - return fmt.Errorf("deleting resource tags (%s/%s): %w", bucket, key, err) - } - } - - return nil -} diff --git a/internal/service/s3/wait.go b/internal/service/s3/wait.go index 40d03699f35..fa5525cb2e5 100644 --- a/internal/service/s3/wait.go +++ b/internal/service/s3/wait.go @@ -9,7 +9,6 @@ import ( "github.com/aws/aws-sdk-go/service/s3" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) const ( @@ -27,10 +26,6 @@ const ( LifecycleConfigurationRulesStatusNotReady = "NOT_READY" ) -func retryWhenBucketNotFound(ctx context.Context, f func() (interface{}, error)) (interface{}, error) { - return tfresource.RetryWhenAWSErrCodeEquals(ctx, s3BucketPropagationTimeout, f, s3.ErrCodeNoSuchBucket) -} - func waitForLifecycleConfigurationRulesStatus(ctx context.Context, conn *s3.S3, bucket, expectedBucketOwner string, rules []*s3.LifecycleRule) error { stateConf := &retry.StateChangeConf{ Pending: []string{"", LifecycleConfigurationRulesStatusNotReady}, diff --git a/website/docs/d/s3_bucket_object.html.markdown b/website/docs/d/s3_bucket_object.html.markdown index aa4dd601016..4a9c252435c 100644 --- a/website/docs/d/s3_bucket_object.html.markdown +++ b/website/docs/d/s3_bucket_object.html.markdown @@ -79,7 +79,7 @@ This data source exports the following attributes in addition to the arguments a * `expiration` - If the object expiration is configured (see [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html)), the field includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded. * `expires` - Date and time at which the object is no longer cacheable. * `last_modified` - Last modified date of the object in RFC1123 format (e.g., `Mon, 02 Jan 2006 15:04:05 MST`) -* `metadata` - Map of metadata stored with the object in S3 +* `metadata` - Map of metadata stored with the object in S3. [Keys](https://developer.hashicorp.com/terraform/language/expressions/types#maps-objects) are always returned in lowercase. * `object_lock_legal_hold_status` - Indicates whether this object has an active [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-legal-holds). This field is only returned if you have permission to view an object's legal hold status. * `object_lock_mode` - Object lock [retention mode](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes) currently in place for this object. 
* `object_lock_retain_until_date` - The date and time when this object's object lock will expire. diff --git a/website/docs/d/s3_object.html.markdown b/website/docs/d/s3_object.html.markdown index e692408ea86..65a8d0df481 100644 --- a/website/docs/d/s3_object.html.markdown +++ b/website/docs/d/s3_object.html.markdown @@ -82,7 +82,7 @@ This data source exports the following attributes in addition to the arguments a * `expiration` - If the object expiration is configured (see [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html)), the field includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded. * `expires` - Date and time at which the object is no longer cacheable. * `last_modified` - Last modified date of the object in RFC1123 format (e.g., `Mon, 02 Jan 2006 15:04:05 MST`) -* `metadata` - Map of metadata stored with the object in S3 +* `metadata` - Map of metadata stored with the object in S3. [Keys](https://developer.hashicorp.com/terraform/language/expressions/types#maps-objects) are always returned in lowercase. * `object_lock_legal_hold_status` - Indicates whether this object has an active [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-legal-holds). This field is only returned if you have permission to view an object's legal hold status. * `object_lock_mode` - Object lock [retention mode](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes) currently in place for this object. * `object_lock_retain_until_date` - The date and time when this object's object lock will expire. diff --git a/website/docs/r/cloudfront_distribution.html.markdown b/website/docs/r/cloudfront_distribution.html.markdown index 5a6c63c6bff..1a1ad590a72 100644 --- a/website/docs/r/cloudfront_distribution.html.markdown +++ b/website/docs/r/cloudfront_distribution.html.markdown @@ -386,8 +386,8 @@ argument should not be specified. * `origin_access_control_id` (Optional) - Unique identifier of a [CloudFront origin access control][8] for this origin. * `origin_id` (Required) - Unique identifier for the origin. * `origin_path` (Optional) - Optional element that causes CloudFront to request your content from a directory in your Amazon S3 bucket or your custom origin. -* `origin_shield` - The [CloudFront Origin Shield](#origin-shield-arguments) configuration information. Using Origin Shield can help reduce the load on your origin. For more information, see [Using Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html) in the Amazon CloudFront Developer Guide. -* `s3_origin_config` - The [CloudFront S3 origin](#s3-origin-config-arguments) configuration information. If a custom origin is required, use `custom_origin_config` instead. +* `origin_shield` - (Optional) [CloudFront Origin Shield](#origin-shield-arguments) configuration information. Using Origin Shield can help reduce the load on your origin. For more information, see [Using Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html) in the Amazon CloudFront Developer Guide. +* `s3_origin_config` - (Optional) [CloudFront S3 origin](#s3-origin-config-arguments) configuration information. If a custom origin is required, use `custom_origin_config` instead. 
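To illustrate the two optional blocks above, here is a minimal sketch of an `origin` block for an S3 origin. The resource names (`aws_s3_bucket.example`, `aws_cloudfront_origin_access_identity.example`) are illustrative, and the rest of the distribution configuration is omitted:

```hcl
origin {
  domain_name = aws_s3_bucket.example.bucket_regional_domain_name
  origin_id   = "exampleS3Origin"

  # Optional: enable Origin Shield to reduce the load on the origin.
  # The region is given as a region code (see the Origin Shield arguments below).
  origin_shield {
    enabled              = true
    origin_shield_region = "us-east-2"
  }

  # Optional: S3 origin settings; for a custom origin, use
  # custom_origin_config instead.
  s3_origin_config {
    origin_access_identity = aws_cloudfront_origin_access_identity.example.cloudfront_access_identity_path
  }
}
```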
##### Custom Origin Config Arguments @@ -401,7 +401,7 @@ argument should not be specified. ##### Origin Shield Arguments * `enabled` (Required) - Whether Origin Shield is enabled. -* `origin_shield_region` (Required) - AWS Region for Origin Shield. To specify a region, use the region code, not the region name. For example, specify the US East (Ohio) region as us-east-2. +* `origin_shield_region` (Optional) - AWS Region for Origin Shield. To specify a region, use the region code, not the region name. For example, specify the US East (Ohio) region as `us-east-2`. ##### S3 Origin Config Arguments
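
As a hedged illustration of this section's usage, an S3 origin configuration typically references a CloudFront origin access identity (the resource name here is illustrative):

```hcl
s3_origin_config {
  # Grants CloudFront read access to the bucket via an origin access identity.
  origin_access_identity = aws_cloudfront_origin_access_identity.example.cloudfront_access_identity_path
}
```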