Tags: amazon-web-services, amazon-s3, terraform, amazon-cloudfront

terraform null_resource: S3 bucket update no longer triggers chained update of CloudFront distribution


Before, I used the resource aws_s3_object to sync local files to the S3 bucket. Back then, updates in S3 also triggered an update of the connected CloudFront distribution.

Now I have replaced only the aws_s3_object resource mentioned above with a null_resource / provisioner "local-exec" in the project. Since then, terraform apply only detects changes for the S3 bucket and no longer triggers updates for CloudFront.

What did I do wrong / what is missing here?

The related code:

Before (CloudFront is updated when S3 is updated):

resource "aws_s3_object" "site" {
  for_each     = fileset("./site/", "*")
  bucket       = aws_s3_bucket.xyz.id
  key          = each.value
  source       = "./site/${each.value}"
  etag         = filemd5("./site/${each.value}")
  content_type = "text/html;charset=UTF-8"
}

After (only S3 is updated, CloudFront is not updated):

resource "null_resource" "remove_and_upload_to_s3" {
  triggers = {
    always_run = "${timestamp()}"
  }
  provisioner "local-exec" {
    command = "aws s3 sync ${path.module}/site s3://${aws_s3_bucket.xyz.id}"
  }
}

Solution

  • Since CloudFront keeps serving the old objects even after they have been updated in S3, the problem seems to be that the cache TTL is not configured. There are multiple ways to address this:

    By default, each file automatically expires after 24 hours, but you can change the default behavior in different ways:

    To change the cache duration for all files that match the same path pattern, you can change the CloudFront settings for Minimum TTL, Maximum TTL, and Default TTL for a cache behavior. For information about the individual settings, see Minimum TTL, Maximum TTL, and Default TTL in Values that you specify when you create or update a distribution.
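
    In Terraform, those TTLs live on the distribution's cache behavior. Below is a minimal sketch of a distribution wired to the bucket from the question; the resource name "site", the origin id, the default certificate and the TTL values are placeholder assumptions (origin access settings are omitted), and in practice you would only adjust the min_ttl / default_ttl / max_ttl arguments on your existing distribution:

    resource "aws_cloudfront_distribution" "site" {
      enabled             = true
      default_root_object = "index.html"

      origin {
        domain_name = aws_s3_bucket.xyz.bucket_regional_domain_name
        origin_id   = "s3-site"
      }

      default_cache_behavior {
        target_origin_id       = "s3-site"
        allowed_methods        = ["GET", "HEAD"]
        cached_methods         = ["GET", "HEAD"]
        viewer_protocol_policy = "redirect-to-https"

        # Shorter TTLs so CloudFront re-fetches updated objects from S3 sooner
        min_ttl     = 0
        default_ttl = 300   # 5 minutes instead of the 24-hour default
        max_ttl     = 3600

        forwarded_values {
          query_string = false
          cookies {
            forward = "none"
          }
        }
      }

      restrictions {
        geo_restriction {
          restriction_type = "none"
        }
      }

      viewer_certificate {
        cloudfront_default_certificate = true
      }
    }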

    To change the cache duration for an individual file, you can configure your origin to add a Cache-Control header with the max-age or s-maxage directive, or an Expires header to the file.
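
    With the aws s3 sync approach from the question, such a Cache-Control header can be attached at upload time through the CLI's --cache-control option. A sketch based on the question's null_resource; the max-age of 300 seconds is only an example value:

    resource "null_resource" "remove_and_upload_to_s3" {
      triggers = {
        always_run = timestamp()
      }

      provisioner "local-exec" {
        # Each uploaded object gets Cache-Control: max-age=300, so CloudFront
        # and browsers re-validate it after at most 5 minutes
        command = "aws s3 sync ${path.module}/site s3://${aws_s3_bucket.xyz.id} --cache-control max-age=300"
      }
    }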

    Another option (which can be very expensive depending on the number of objects, so be careful) is to invalidate the CloudFront cache each time the objects are updated, using the built-in terraform_data resource. This assumes you keep the aws_s3_object.site resources, so their etags can serve as triggers:

    resource "terraform_data" "remove_and_upload_to_s3" {
      for_each     = aws_s3_object.site
      triggers_replace = [
        each.value.etag
      ]
    
      provisioner "local-exec" {
        command = "aws cloudfront create-invalidation --distribution-id <your CF distribution id> --paths "/example-path/${each.value.key}"
      }
    }
    

    Note that you can also specify multiple paths if needed, as sketched below. Additionally, this will work only if the new objects have the same names as the old ones.
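
    For example, when many objects change in one deployment, a single invalidation with several paths (or a wildcard such as "/*") can keep the number of invalidation paths down compared to one invalidation per object. A sketch, again with a placeholder distribution id and example paths:

    resource "terraform_data" "invalidate_site" {
      # Re-runs on every apply, like the null_resource in the question
      triggers_replace = [
        timestamp()
      ]

      provisioner "local-exec" {
        # One invalidation covering several example paths; "/*" would cover everything
        command = "aws cloudfront create-invalidation --distribution-id <your CF distribution id> --paths \"/index.html\" \"/assets/*\""
      }
    }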