I have the following advanced log query:
resource.type="container"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_id="mynamespace"
"SOMESTRING"
which, when executed, fetches the expected results.
I create a custom metric based on this query, then select "Create Alert from Metric" (my-custom-metric) and try to set up an alert.
When finished and trying to save the alerting policy, I get the following error:
Error 400: Field alert_policy.conditions[0].condition_threshold.filter had an invalid value of "metric.type="logging.googleapis.com/user/my-custom-metric" resource.type="container"": The filter contains unknown resource type: container
How is this even possible? Stackdriver itself filled in the resource type automatically when I selected "Create Alert from Metric".
The reason for this error message is the use of Legacy Stackdriver support in Kubernetes [1].
In Legacy Kubernetes Stackdriver, GCP has two different resource types for Kubernetes:
1. gke_container: used for metrics only
2. container: used for logs only
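Under the legacy model, a log filter and the corresponding alerting filter for the same workload therefore reference different resource types. A sketch of the mismatch, reusing the cluster and metric names from the question:

```
# Logging (Legacy Stackdriver): logs are attached to the "container" resource type
resource.type="container"
resource.labels.cluster_name="my-cluster"

# Monitoring (Legacy Stackdriver): metrics use "gke_container" instead,
# so an alerting filter with resource.type="container" is rejected
metric.type="logging.googleapis.com/user/my-custom-metric"
resource.type="gke_container"
```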
In the new version of Stackdriver, GCP has only one resource type, k8s_container, covering both metrics and logs. Migrating to this new version will fix the issue definitively.
The new Stackdriver version is enabled by default on Kubernetes 1.14+, but you can enable it manually, as described in this documentation [2][3], if you are using an earlier version.
However, as a workaround, you can simply delete the red-highlighted resource type in the alerting condition's filter and add gke_container instead. That worked for me.
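Applied to the error message above, the corrected condition filter would look like this (assuming the metric name from the question):

```
# Workaround on Legacy Stackdriver: keep the log-based metric as-is,
# but point the alerting condition at the metrics-side resource type
metric.type="logging.googleapis.com/user/my-custom-metric"
resource.type="gke_container"
```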
[1] https://cloud.google.com/monitoring/kubernetes-engine/migration#what-is-changing
[2] https://cloud.google.com/monitoring/kubernetes-engine/installing#migrating
[3] https://cloud.google.com/monitoring/kubernetes-engine/migration#upgrade-timeline