google-cloud-platform, google-cloud-monitoring

GCP Monitoring - Incident does not contain system labels


I've created an alert policy via the GCP console. This policy sends incidents to a Pub/Sub notification channel.

For example, a high CPU utilization policy for containers:

{
  "name": "...",
  "displayName": "...",
  "documentation": {},
  "conditions": [
    {
      "name": "...",
      "displayName": "Kubernetes Container - CPU usage time",
      "conditionThreshold": {
        "aggregations": [
          {
            "alignmentPeriod": "300s",
            "perSeriesAligner": "ALIGN_RATE"
          }
        ],
        "comparison": "COMPARISON_GT",
        "duration": "0s",
        "filter": "metric.type=\"kubernetes.io/container/cpu/core_usage_time\" resource.type=\"k8s_container\"",
        "thresholdValue": 0.04,
        "trigger": {
          "count": 1
        }
      }
    }
  ],
  "alertStrategy": {
    "autoClose": "604800s",
    "notificationPrompts": [
      "OPENED"
    ]
  },
  "combiner": "OR",
  "enabled": true,
  "notificationChannels": [
    "..."
  ],
  "creationRecord": {
    "mutateTime": "...",
    "mutatedBy": "..."
  },
  "mutationRecord": {
    "mutateTime": "...",
    "mutatedBy": "..."
  }
}

Once I trigger this alert and receive the incident on the Pub/Sub side, the system_labels map in the metadata field is always empty:

    "metadata": {
      "system_labels": {},
      "user_labels": {}
    },

However, if I use Metrics Explorer to view this resource, I can see that these labels are populated.

Any suggestions?


Solution

  • It's not a bug. Values for metadata.* variables are available only if the labels are explicitly included in a condition's filter or in its grouping fields for cross-series aggregation. In other words, you must reference the metadata label in either the filter or the grouping for it to have a value in the notification. For more information, refer to the documentation.
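Concretely, the condition from the policy above could pull a metadata label into the incident by grouping on it during cross-series aggregation (a crossSeriesReducer is required for groupByFields to take effect), or by referencing it in the filter string. A sketch of the grouping approach, assuming node_name as an illustrative system label key for the k8s_container resource; substitute whichever label you actually need:

```json
{
  "conditionThreshold": {
    "aggregations": [
      {
        "alignmentPeriod": "300s",
        "perSeriesAligner": "ALIGN_RATE",
        "crossSeriesReducer": "REDUCE_SUM",
        "groupByFields": [
          "metadata.system_labels.node_name"
        ]
      }
    ],
    "comparison": "COMPARISON_GT",
    "duration": "0s",
    "filter": "metric.type=\"kubernetes.io/container/cpu/core_usage_time\" resource.type=\"k8s_container\"",
    "thresholdValue": 0.04,
    "trigger": {
      "count": 1
    }
  }
}
```

With the label in groupByFields, incidents for this condition should carry it under metadata.system_labels in the Pub/Sub payload. Note that grouping also changes the alerting behavior: the threshold is then evaluated per group rather than per individual time series.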