I am trying to add a conditional block to a resource when an optional map variable has key/value pairs. If the map is not null, I want each key in the map to become an argument name and its value to become that argument's value.
Currently this is what I'm trying to do:
variable "aws_attributes" {
type = optional(map(string))
default = {"arg1":"val1", "arg2":"val2"}
}
dynamic "aws_attributes" {
for_each = { for key, value in var.aws_attributes: key => value }
content {
key = value
}
}
The problem I get is: An argument named "key" is not expected here, which seems to indicate that the interpolation is not happening.
I tried this too:
dynamic "aws_attributes" {
for_each = var.aws_attributes
content {
key = value
}
}
and get:
Error: Unsupported argument
on modules/dlt_jobs/main.tf line 37, in resource "databricks_pipeline" "this":
37: key = value
An argument named "key" is not expected here.
If the map is empty, the "aws_attributes" block should not be created at all; if it's not empty, it should be populated with the key/values from the map variable (aws_attributes).
Just for good measure, here is the entire resource block.
resource "databricks_pipeline" "this" {
name = var.dlt_jobs.task_name
storage = var.dlt_jobs.storage_location
dynamic "library" {
for_each = var.dlt_jobs.notebook_libraries
content {
notebook {
path = "${local.user_home}${library.value}"
}
}
}
configuration = {
input = var.dlt_jobs.configuration.input
"pipelines.UI.enablePartialGraphUpdate" = var.dlt_jobs.configuration.pipeline_refresh
}
dynamic "cluster" {
for_each = var.dlt_jobs.cluster != null ? var.dlt_jobs.cluster : []
content {
label = "default"
spark_conf = var.spark_conf
dynamic "aws_attributes" {
for_each = var.aws_attributes
content {
key = value
}
}
autoscale {
min_workers = 1
max_workers = 5
mode = "ENHANCED"
}
custom_tags = {
cluster_type = "job_cluster"
}
}
}
continuous = var.dlt_jobs.continous
photon = var.dlt_jobs.photon
target = var.dlt_jobs.target_database
edition = var.dlt_jobs.edition
channel = var.dlt_jobs.channel
}
Anyone got any ideas please?
In this case I would recommend just writing the attributes out explicitly, as the number of optional attributes is small.
The Terraform provider should ignore values set to null and fall back to its internal defaults in that case (I do not know the Databricks provider well enough to say for certain).
The code would look like this:
dynamic "aws_attributes" {
for_each = [var.aws_attributes]
content {
zone_id = aws_attributes.value.zone_id
availability = try(aws_attributes.value.availability, null)
first_on_demand = try(aws_attributes.value.first_on_demand, null)
spot_bid_price_percent = try(aws_attributes.value.spot_bid_price_percent, null)
instance_profile_arn = try(aws_attributes.value.instance_profile_arn, null)
ebs_volume_type = try(aws_attributes.value.ebs_volume_type, null)
ebs_volume_count = try(aws_attributes.value.ebs_volume_count, null)
ebs_volume_size = try(aws_attributes.value.ebs_volume_size, null)
}
}
The loop could also be improved so that the block is not generated at all when the variable is null or empty:
  for_each = var.aws_attributes != null ? [var.aws_attributes] : []
or
  for_each = length(var.aws_attributes) > 0 ? [var.aws_attributes] : []
The same behavior could be achieved using the lookup() function instead of try().
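For example, a minimal sketch of the same block using lookup() with an explicit null default and the null guard from above (only two of the attributes shown):
dynamic "aws_attributes" {
  for_each = var.aws_attributes != null ? [var.aws_attributes] : []
  content {
    # lookup(map, key, default) returns the default when the key is missing
    zone_id      = lookup(aws_attributes.value, "zone_id", null)
    availability = lookup(aws_attributes.value, "availability", null)
    # the remaining attributes follow the same pattern
  }
}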
For more dynamic generation of Terraform code in general, you can also use tooling like Terramate, which offers very flexible dynamic generation but follows a different data model for setting values:
globals {
  aws_attributes = { zone_id = "us-west-2a", availability = "SPOT" }
}

generate_hcl "main.tf" {
  content {
    # other surrounding code ...
    tm_dynamic "aws_attributes" {
      # only generate the block if there are keys
      condition = tm_length(global.aws_attributes) > 0
      # dynamically fill the block with all key/value pairs
      attributes = global.aws_attributes
    }
    # other code
  }
}