I am currently using AWS ECS for my service deployments. For shared storage, I am mounting some EFS volumes. Here is my task definition:
resource "aws_ecs_task_definition" "ecs-fargate" {
family = var.ecs_task_definition_name
container_definitions = var.container_definitions
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = var.ecs_task_cpu
memory = var.ecs_task_memory
execution_role_arn = var.ecs_task_execution_role_arn
task_role_arn = var.ecs_task_role_arn
dynamic "volume" {
for_each = var.volumes
content {
name = volume.value["name"]
efs_volume_configuration {
file_system_id = volume.value["file_system_id"]
}
}
}
}
var "volumes" {
default = [
{
name = "vol1"
file_system_id = "fs-xxxxxxxx"
},
{
name = "vol2"
file_system_id = "fs-xxxxxxxx"
}
]
}
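For completeness, the same variable could also be declared with an explicit type so that malformed entries fail at plan time. A sketch, using the same attribute names the dynamic block above reads:

variable "volumes" {
  # Each entry must supply the two attributes the dynamic "volume"
  # block looks up; anything else is rejected at plan time.
  type = list(object({
    name           = string
    file_system_id = string
  }))
  default = []
}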
The Terraform code above works fine. But every time I run terraform apply, the task definition first detaches the EFS volumes and then re-attaches the same ones again. Here is the relevant part of the plan output showing the issue:
- volume {
    - name = "vol1" -> null
    - efs_volume_configuration {
        - file_system_id = "fs-xxxxxxx" -> null
        - root_directory = "/" -> null
      }
  }
+ volume {
    + name = "vol1"
    + efs_volume_configuration {
        + file_system_id = "fs-xxxxxx"
        + root_directory = "/"
      }
  }
Am I missing some additional Terraform configuration that would prevent this?
If you apply your TF config, you should see that no changes are actually performed. If you check the TF docs for efs_volume_configuration, you will see that it has a number of attributes. Some of them have default values, such as root_directory, which you don't specify. TF may need to pick up those default values after your initial apply, which is why you may see them in your subsequent terraform plan.
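If you want to keep the plan clean from the start, you can pin those attributes yourself. A minimal sketch of the dynamic block, assuming the provider default of "/" is the root directory you actually want:

dynamic "volume" {
  for_each = var.volumes
  content {
    name = volume.value["name"]
    efs_volume_configuration {
      file_system_id = volume.value["file_system_id"]
      # Declaring root_directory explicitly keeps the config aligned
      # with what AWS stores ("/" is the value the provider fills in
      # when the attribute is omitted), so plans show no spurious diff.
      root_directory = "/"
    }
  }
}

With the attribute set explicitly, the configuration matches the value recorded in state, and subsequent plans should report no changes for the volume blocks.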