
How to manage locally generated stateful files in Terraform


I have a Terraform (1.0+) configuration that generates a local config file from a template based on some inputs, e.g.:

locals {
  config_tpl = templatefile("${path.module}/config.tpl", {
    foo = "bar"
  })
}

resource "local_file" "config" {
  content  = local.config_tpl
  filename = "${path.module}/config.yaml"
}

This file is then used by a subsequent command run from a local-exec block, which in turn also generates local config files:

resource "null_resource" "my_command" {
  provisioner "local-exec" {
    when        = create
    command     = "../scripts/my_command.sh"
    working_dir = path.module
  }

  depends_on = [
    local_file.config,
  ]
}

my_command.sh generates infrastructure for which there is no Terraform provider currently available.

All of the generated files should form part of the configuration state, as they are required later during upgrades and ultimately to destroy the environment.

I would also like to run these scripts from a CI/CD pipeline, where you would naturally expect the workspace to be clean on each run, which means the generated files won't be present.

Is there a pattern for managing files such as these? My initial thought is to create a cloud storage bucket, zip the files up, and store them there, pulling them back down whenever they're needed. However, this feels even dirtier than what is already happening, and it seems likely to run into dependency issues.
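For reference, a minimal sketch of that bucket idea, assuming the hashicorp/archive provider and an existing AWS S3 bucket (the bucket name and paths are illustrative, not part of the original configuration):

```hcl
# Sketch only: zip the locally generated files and push the archive
# to an assumed, pre-existing S3 bucket named "my-tf-artifacts".
data "archive_file" "generated" {
  type        = "zip"
  source_dir  = "${path.module}/generated"
  output_path = "${path.module}/generated.zip"
}

resource "aws_s3_object" "generated" {
  bucket = "my-tf-artifacts"
  key    = "generated.zip"
  source = data.archive_file.generated.output_path

  # Re-upload whenever the archive's contents change.
  etag = data.archive_file.generated.output_md5
}
```

A pipeline run would then need a separate step to download and unzip the archive before Terraform or the scripts can use the files again, which is where the dependency-ordering concerns come in.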

Or am I missing a completely different approach to solving issues such as this?


Solution

  • The problem you've encountered here is what the warning in the hashicorp/local provider's documentation is discussing:

    Terraform primarily deals with remote resources which are able to outlive a single Terraform run, and so local resources can sometimes violate its assumptions. The resources here are best used with care, since depending on local state can make it hard to apply the same Terraform configuration on many different local systems where the local resources may not be universally available. See specific notes in each resource for more information.

    The short and unfortunate answer is that what you are trying to do here is not a problem Terraform is designed to address: its purpose is to manage long-lived objects in remote systems, not artifacts on your local workstation where you are running Terraform.

    In the case of your config.yaml file you may find it a suitable alternative to use a cloud storage object resource type instead of local_file, so that Terraform will just write the file directly to that remote storage and not affect the local system at all. Of course, that will help only if whatever you intend to have read this file is also able to read from the same cloud storage, or if you can write a separate glue script to fetch the object after terraform apply is finished.
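    For example, a minimal sketch of that approach using an assumed AWS S3 bucket (the bucket name is illustrative) would replace local_file with aws_s3_object:

    ```hcl
    # Sketch: write the rendered template straight to remote storage
    # instead of the local filesystem. Bucket name is illustrative.
    resource "aws_s3_object" "config" {
      bucket = "my-config-bucket"
      key    = "config.yaml"
      content = templatefile("${path.module}/config.tpl", {
        foo = "bar"
      })
    }
    ```

    Whatever consumes the file can then fetch it from the bucket (for example with the AWS CLI) in a glue step after terraform apply.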

    There is no straightforward path to treating the result of a provisioner as persistent data in the state. Provisioners are, by definition, one-shot actions: they run only when a resource is created (or, with when = destroy, when it is destroyed), and nothing they produce is tracked in the state.