This can be viewed as a "generic" Terraform question.
I have a module (vm-instances) used to create my virtual machines. This module has a variable boot_volume_key_id, which holds the id of the KMS key if it exists. This key is created in another configuration file if a local variable create_kms_key is set.
I'm using the newly created KMS key to update the boot_volume of my existing instances, and in order to use this KMS key for the boot volume, an identity policy for managing the boot volume is created. The oci_kms_key is referenced in a statement in the oci_identity_policy.
E.g. simplified configuration files:
kms.tf
resource "oci_kms_key" "boot_volume_key" {
  count = local.create_kms_key ? 1 : 0
  ...
}
resource "oci_identity_policy" "boot_volume" {
  count = local.create_kms_key ? 1 : 0
  ...
  statements = [
    "Allow ... where target.key.id = '${oci_kms_key.boot_volume_key[0].id}'"
  ]
}
main.tf
module "instances" {
  source = "./module/vm-instances"
  ...
  boot_volume_key_id = local.create_kms_key ? oci_kms_key.boot_volume_key[0].id : null
}
Problem: I get a 404-NotAuthorizedOrNotFound error on UpdateVolumeKMS after the first terraform apply; however, it works after a second apply. I believe this is because the identity policy takes some time to become "functioning".
How can I avoid this problem? I've looked at options like depends_on and the lifecycle meta-argument ignore_changes.
The problem with depends_on is that if I make my entire vm-instances module depend on the identity policy, terraform plan shows that my instances are going to get recreated, which I do not want. The problem with lifecycle's ignore_changes is that my virtual machines won't get updated when I introduce the KMS key into the configuration file. A time_sleep resource slows down every Terraform run unnecessarily.

Looks like oci_identity_policy can take up to 10 seconds:
New policies take effect typically within 10 seconds.
So indeed, even if you set an explicit dependency with depends_on, or have an implicit one by just using attributes from oci_identity_policy.boot_volume, you would still need to wait for that duration.
I suggest either filing an issue in the OCI provider's repository so they implement this "readiness" check in the provider, or using a workaround like time_sleep:
resource "oci_identity_policy" "boot_volume" {
count = local.create_kms_key ? 1 : 0
...
}
resource "time_sleep" "wait_for_identity_policy" {
depends_on = [oci_identity_policy.boot_volume]
create_duration = "15s"
}
resource "SOMETHING" "next" {
depends_on = [time_sleep.wait_for_identity_policy]
}
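If the SOMETHING placeholder is the vm-instances module itself, a module-level depends_on would bring back the recreation plans described in the question. A narrower variant (a sketch, not tested against OCI) is to re-export the key id through time_sleep's triggers map, so that only the boot_volume_key_id argument inherits the delay:

```hcl
resource "time_sleep" "wait_for_identity_policy" {
  depends_on      = [oci_identity_policy.boot_volume]
  create_duration = "15s"

  # Any reference to this map implicitly waits for the sleep to finish.
  triggers = {
    key_id = local.create_kms_key ? oci_kms_key.boot_volume_key[0].id : ""
  }
}

module "instances" {
  source = "./module/vm-instances"

  # Implicit dependency on the delay, scoped to this one argument only,
  # so the rest of the module's plan is unaffected.
  boot_volume_key_id = local.create_kms_key ? time_sleep.wait_for_identity_policy.triggers["key_id"] : null
}
```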