I'm having an issue with terraform/terragrunt deploying into multiple accounts. I want to manage DNS and KMS in my "master" account and deploy everything else into my dev/uat/prod environments.
I've configured multiple AWS accounts using providers. One provider is auto-generated by Terragrunt and the other is declared in my main.tf file.
I've tried many different approaches: aliasing both providers, setting just one provider with an alias, and removing Terragrunt from the equation entirely. In every case, the Terraform is applied to my "master" account for all resources.
Below is an example of my code:
remote_state {
  backend = "s3"
  # generate = {
  #   path      = "backend.tf"
  #   if_exists = "overwrite"
  # }
  config = {
    bucket         = "arm-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "arm-terraform-state-lock"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "aws" {
  alias   = "main"
  region  = "${local.aws_region}"
  profile = "${local.account}"
}
EOF
}
locals {
  region_vars  = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))

  aws_region = local.region_vars.locals.aws_region
  account    = local.account_vars.locals.aws_profile
}
Above is my Terragrunt code. This is my module:
resource "aws_iam_role_policy" "logging_role_policy" {
  provider = aws.main
  name     = format("aws-sftp-logging-policy-%s-%s", var.product_name, var.env)
  role     = aws_iam_role.logging_role.id
  policy   = data.aws_iam_policy_document.sftp_logging.json
}
############
# Route 53 #
############
resource "aws_route53_record" "sftp_record" {
  provider = aws.master
  zone_id  = data.aws_route53_zone.facteus.zone_id
  name     = format("%s-%s", var.product_name, var.env)
  type     = "CNAME"
  ttl      = "30"
  records  = [aws_transfer_server.aws_transfer_service.endpoint]
}
You should start by creating a directory structure that mirrors your desired account structure. Based on your question, it sounds like you might want something like this:
├── dev
│   ├── terragrunt.hcl
│   └── us-east-1
├── master
│   ├── _global
│   │   └── dns
│   ├── terragrunt.hcl
│   └── us-east-1
│       └── kms
├── prod
│   ├── terragrunt.hcl
│   └── us-east-1
└── uat
    ├── terragrunt.hcl
    └── us-east-1
The master account here has two directories that the others do not have:

_global/dns - since Route 53 in AWS is a global entity (not regional), you don't want to nest it under us-east-1. It lives only in the master account, since you stated that you want to control DNS from master. (I'd also recommend naming it route53 rather than dns, but I digress.)

us-east-1/kms - this contains the KMS configuration, also only for master.
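Each leaf directory then only needs a small terragrunt.hcl that pulls in the account-level configuration. A minimal sketch (the module source path here is an assumption; substitute wherever your Route 53 module actually lives):

```hcl
# master/_global/dns/terragrunt.hcl (sketch)

include {
  # Pull in the remote_state configuration from master/terragrunt.hcl
  path = find_in_parent_folders()
}

terraform {
  # Hypothetical path - point at the module holding your Route 53 resources
  source = "../../../modules//route53"
}
```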
Now, in master/terragrunt.hcl, set up your remote_state configuration:
remote_state {
  backend = "s3"
  config = {
    encrypt        = true
    bucket         = "master-terraform-state" # Just an example - the name must be globally unique
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}
Optionally, you can also include an iam_role attribute:
iam_role = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
Note that this is optional because you can also simply execute Terragrunt with credentials for the master account, use the --terragrunt-iam-role CLI option, or set the TERRAGRUNT_IAM_ROLE environment variable. In all cases, you'll be executing Terragrunt with a role that has permissions in the master account.
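For example, either of the following supplies the role at invocation time (the account ID and role name are placeholders for illustration):

```
# Option 1: pass the role on the command line
terragrunt apply --terragrunt-iam-role "arn:aws:iam::111111111111:role/terraform-master"

# Option 2: set it via the environment variable
export TERRAGRUNT_IAM_ROLE="arn:aws:iam::111111111111:role/terraform-master"
terragrunt apply
```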
Now, for the dev/uat/prod accounts, you can have a similar remote_state configuration, substituting a different bucket name for each of them. You can then use the IAM role appropriate to each of these accounts, either by defining it as iam_role in the terragrunt.hcl within each account, or by the other methods I mentioned.
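For instance, dev/terragrunt.hcl might look like this (the bucket name and role ARN below are placeholders):

```hcl
# dev/terragrunt.hcl (sketch)

remote_state {
  backend = "s3"
  config = {
    encrypt        = true
    bucket         = "dev-terraform-state" # placeholder - must be globally unique
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}

# Placeholder ARN - use the role that has permissions in the dev account
iam_role = "arn:aws:iam::222222222222:role/terraform-dev"
```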
TL;DR: You don't need to generate the provider for this, and you don't need to name a profile in the provider configuration. You simply execute Terragrunt with the correct IAM role for each account, and Terragrunt will assume that role before invoking Terraform. Put the provider configuration in your Terraform module, leaving the profile off.
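With that approach, the provider block in the module shrinks to something like this (var.aws_region is an assumed input variable):

```hcl
provider "aws" {
  # No profile or alias needed - credentials come from the role Terragrunt assumed
  region = var.aws_region
}
```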