I have a pretty standard custom GKE cluster deployed by Terraform. The following resource was deployed into the cluster:
resource "kubernetes_manifest" "backendconfig_health_check" {
  manifest = {
    "apiVersion" = "cloud.google.com/v1"
    "kind"       = "BackendConfig"
    "metadata" = {
      "name"      = "svc-health-check"
      "namespace" = "svc-primary"
    }
    "spec" = {
      "healthCheck" = {
        "healthyThreshold" = 1
        "port"             = 8090
        "requestPath"      = "/api/v1/health"
        "type"             = "HTTP"
      }
    }
  }
  depends_on = [google_container_cluster.primary]
}
Everything went well, but after some time terraform plan started to fail with the following error:
╷
│ Warning: Attribute not found in schema
│
│ with module.gke.kubernetes_manifest.backendconfig_health_check,
│ on gke/kubernetes_manifest_backendconfig_health_check.tf line 1, in resource "kubernetes_manifest" "backendconfig_health_check":
│ 1: resource "kubernetes_manifest" "backendconfig_health_check" {
│
│ Unable to find schema type for attribute:
│ metadata.clusterName
│
│ (and 6 more similar warnings elsewhere)
╵
Failed generating plan JSON
Exit code: 1
Failed to marshal plan to json: error in marshalResourceDrift: failed to encode refreshed data for module.gke.kubernetes_manifest.backendconfig_health_check as JSON: attribute "object": attribute "metadata": attribute "clusterName" is required
It looks like clusterName is not a required metadata attribute; in fact, it does not exist in the Kubernetes metadata schema at all.
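One way to confirm this (a sketch, assuming kubectl access to the cluster and that the BackendConfig CRD publishes its OpenAPI schema):

```shell
# List the metadata fields the API server actually serves for BackendConfig.
# If clusterName were a real field, it would appear in this output.
kubectl explain backendconfig.metadata
```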
Terraform has the following provider configuration:
required_providers {
  google = {
    source  = "hashicorp/google"
    version = "4.76.0"
  }
  google-beta = {
    source  = "hashicorp/google-beta"
    version = "4.76.0"
  }
  # https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
  kubernetes = {
    source  = "hashicorp/kubernetes"
    version = "2.19.0"
  }
}
Also, the terraform state show module.gke.kubernetes_manifest.backendconfig_health_check command returns an object attribute like the one below:
...
object = {
  apiVersion = "networking.gke.io/v1beta1"
  kind       = "FrontendConfig"
  metadata = {
    ... ... ...
    clusterName = null
    ... ... ...
  }
...
I haven't changed anything in the Terraform scripts, so I am searching for a solution to this.
Any help will be appreciated! Thank you!
In the end, the solution was to remove the invalid resources from the Terraform state with terraform state rm __resource__ and then re-import them with terraform import.
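For reference, the sequence looked roughly like this (a sketch; the resource address comes from the error output above, and the kubernetes_manifest import ID format "apiVersion=...,kind=...,namespace=...,name=..." should be checked against the provider docs for your version):

```shell
# Drop the stale object from Terraform state only; the resource in the
# cluster is not touched.
terraform state rm 'module.gke.kubernetes_manifest.backendconfig_health_check'

# Re-import it so the state is regenerated against the current schema.
terraform import 'module.gke.kubernetes_manifest.backendconfig_health_check' \
  "apiVersion=cloud.google.com/v1,kind=BackendConfig,namespace=svc-primary,name=svc-health-check"
```

After the import, a fresh terraform plan should no longer complain about metadata.clusterName.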