I am currently at a loss as to how I am supposed to handle this with Terraform.
I have started writing a module to create a database instance for a service (extremely simplified version below):
# main.tf
resource "google_sql_database_instance" "database_instance" {
database_version = "POSTGRES_14"
deletion_protection = false
instance_type = "CLOUD_SQL_INSTANCE"
name = "test-database"
project = var.project_name
region = "europe-west3"
root_password = null
settings {
activation_policy = "ALWAYS"
availability_type = "ZONAL"
collation = null
connector_enforcement = "NOT_REQUIRED"
disk_autoresize = true
disk_autoresize_limit = 0
disk_type = "PD_SSD"
edition = "ENTERPRISE"
pricing_plan = "PER_USE"
tier = "db-custom-1-3840"
time_zone = null
maintenance_window {
day = 1
hour = 0
update_track = "stable"
}
}
timeouts {
create = null
delete = null
update = null
}
}
resource "google_sql_database" "database" {
instance = google_sql_database_instance.database_instance.name
name = "new_database"
project = var.project_name
depends_on = [google_sql_database_instance.database_instance]
}
When running terraform apply the first time, the database instance and database are created, with the instance created in europe-west3.
However, I changed my mind and want to have the instance in europe-west1.
It's not hard for the instance: I just change the parameter, and terraform plan tells me that the instance will be replaced.
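For clarity, the only edit is this one attribute in the instance block:
  # changed from "europe-west3"
  region = "europe-west1"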
However, no mention is made of the database, which is kept unchanged in the state.
But once the database instance is recreated, the database "new_database" does not appear in the "test-database" instance.
When rerunning terraform apply, it indeed shows that the database needs to be created, and does so without issue.
However, it's a bit annoying to have to run terraform apply twice. Is there something I could specify in my main.tf so that the database is destroyed and then recreated in one single step?
It sounds like there's a hidden¹ constraint between google_sql_database_instance and google_sql_database whereby deleting the database instance implicitly deletes the database. Since Terraform doesn't know to expect that situation, it instead understands the database as having been "deleted outside of Terraform", to be recovered on the next plan/apply round.
Although ideally Terraform would be able to detect situations like this itself, these inter-resource constraints are not currently part of Terraform's model, and so it takes a little extra configuration wiring to help Terraform understand what's going on:
resource "google_sql_database" "database" {
instance = google_sql_database_instance.database_instance.name
name = "new_database"
project = var.project_name
lifecycle {
replace_triggered_by = [
google_sql_database_instance.database_instance.region,
]
}
}
This replace_triggered_by hint tells Terraform that if google_sql_database_instance.database_instance.region has any pending changes then it should plan to replace google_sql_database.database.
If there are any other arguments of google_sql_database_instance that would cause it to be replaced when changed, and you expect to need to change those arguments in the future, you can list those in the replace_triggered_by argument too.
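For example, renaming the instance also forces it to be replaced, so if you expect to change name as well, a sketch of a widened trigger list could look like this:
  lifecycle {
    replace_triggered_by = [
      google_sql_database_instance.database_instance.region,
      google_sql_database_instance.database_instance.name,
    ]
  }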
Although Terraform does allow assigning the whole google_sql_database_instance.database_instance resource in replace_triggered_by, I would not recommend that in this case, because it means Terraform will assume it needs to replace the database if anything changes about the database instance, which I expect is too strong a constraint for the situation you're describing.
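For contrast, that whole-resource form would look like this:
  lifecycle {
    # Caution: this reacts to *any* planned change to the instance,
    # including in-place updates, by replacing the database.
    replace_triggered_by = [
      google_sql_database_instance.database_instance,
    ]
  }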
¹ "hidden" in the sense that Terraform doesn't know about it