Tags: amazon-web-services, terraform, terraform-provider-aws

Why does Terraform want to recreate resources that have not changed?


I have some Terraform that provisions resources in both AWS and MongoDB Atlas. I can apply the Terraform and it creates the resources as expected, but if I immediately run another apply, it wants to destroy and recreate many of those resources even though they have not changed.

For example:

data "aws_route53_zone" "selected" {
  name = var.domain
}

resource "aws_route53_record" "record" {
  name    = "REDACTED"
  type    = "A"
  zone_id = data.aws_route53_zone.selected.zone_id

  alias {
    evaluate_target_health = false
    name                   = var.accelerator_dns_name
    zone_id                = var.accelerator_zone_id
  }
}
And here is the relevant part of the plan from the second apply:

# module.REDACTED.data.aws_route53_zone.selected will be read during apply
# (depends on a resource or a module with changes pending)
<= data "aws_route53_zone" "selected" {
      + arn                        = (known after apply)
      + caller_reference           = (known after apply)
      + comment                    = (known after apply)
      + id                         = (known after apply)
      + linked_service_description = (known after apply)
      + linked_service_principal   = (known after apply)
      + name                       = "REDACTED"
      + name_servers               = (known after apply)
      + primary_name_server        = (known after apply)
      + resource_record_set_count  = (known after apply)
      + tags                       = (known after apply)
      + vpc_id                     = (known after apply)
      + zone_id                    = (known after apply)
    }

# module.REDACTED.aws_route53_record.record must be replaced
-/+ resource "aws_route53_record" "record" {
      + allow_overwrite                  = (known after apply)
      ~ fqdn                             = "REDACTED" -> (known after apply)
      ~ id                               = "REDACTED" -> (known after apply)
      - multivalue_answer_routing_policy = false -> null
        name                             = "REDACTED"
      - records                          = [] -> null
      - ttl                              = 0 -> null
      ~ zone_id                          = "REDACTED" # forces replacement -> (known after apply) # forces replacement
        # (1 unchanged attribute hidden)

        # (1 unchanged block hidden)
    }

Terraform claims that the zone_id has changed, which is why the aws_route53_record must be replaced, but it hasn't actually changed: the value is the same across every apply.


Solution

  • I figured out my problem, which was pretty insidious because it didn't even have anything to do with the resources described in the question above.

    I am invoking two modules from my Terraform, where the outputs of the first are used as inputs to the second (the first sketch below illustrates this wiring). Something was changing within the first module, which in turn marked many (possibly all) of the resources in the second module as changed. The thing that was changing was the container_definitions of some ECS task definitions. The trouble was that this long JSON string contained a sensitive value, so the entire container_definitions attribute was marked as sensitive, making it impossible to see exactly what had changed.

    I worked around the sensitive string by temporarily wrapping it in the nonsensitive() function so terraform plan would show me exactly what changed (see the second sketch below). This revealed that I wasn't setting certain fields in the container definition, such as systemControls. Because it was unspecified, Terraform tried to set it to null, but the provider coalesced null into an empty array instead, so the state appeared to differ on every apply since null != [].

    Explicitly setting systemControls = [] (see the third sketch below) stopped the ECS task definition from being replaced on every apply, which in turn stopped all of its downstream dependencies from being replaced as well.
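
    The first sketch shows the kind of module wiring described above, with hypothetical module names and paths (the real configuration is redacted). The point is that any output of the first module that becomes "(known after apply)" cascades into the second module's inputs:

    module "platform" {
      source = "./modules/platform"
    }

    module "app" {
      source               = "./modules/app"
      domain               = var.domain
      accelerator_dns_name = module.platform.accelerator_dns_name
      accelerator_zone_id  = module.platform.accelerator_zone_id
    }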
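
    The second sketch is a minimal example of the nonsensitive() debugging trick, assuming the container definitions are built with jsonencode() and that a sensitive variable is what taints the whole string (all names here are placeholders, not the original configuration):

    variable "db_password" {
      type      = string
      sensitive = true
    }

    resource "aws_ecs_task_definition" "app" {
      family = "app"

      # The sensitive variable marks the entire JSON string as sensitive, so the
      # plan would normally print only "(sensitive value)". Wrapping the value in
      # nonsensitive() temporarily reveals the real diff; revert after debugging.
      container_definitions = nonsensitive(jsonencode([
        {
          name        = "app"
          image       = "REDACTED"
          essential   = true
          environment = [{ name = "DB_PASSWORD", value = var.db_password }]
        }
      ]))
    }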
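
    The third sketch shows the eventual fix, again with placeholder names: explicitly supply the field the provider would otherwise coalesce from null into an empty list, so the configuration matches what ends up in state:

    resource "aws_ecs_task_definition" "app" {
      family = "app"

      container_definitions = jsonencode([
        {
          name      = "app"
          image     = "REDACTED"
          essential = true

          # Explicitly set systemControls so the provider no longer normalizes a
          # missing value to [], which made every plan see a spurious change.
          systemControls = []
        }
      ])
    }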