I'm working on a decorator that can be added to Python methods so that they send a metric to GCP Cloud Monitoring. The approach itself is confirmed, but the API calls that push the metrics fail if I attempt to send more than one observation. The pattern is to collect metrics during the run and flush them after the process finishes, to keep it simple for this test. The code that captures a metric inline is here:
def append(self, value):
    now = time.time()
    seconds = int(now)
    nanos = int((now - seconds) * 10 ** 9)
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": seconds, "nanos": nanos}}
    )
    point = monitoring_v3.Point({
        "interval": interval,
        "value": {"double_value": value},
    })
    self.samples[self.name].append(point)
The code below takes the batch of data points in the PerfMetric.samples dict, which maps metric names to lists of monitoring_v3.Point objects built by append (invoked by a decorator; a simplified sketch of it is further down), and pushes them with the create_time_series RPC on the MetricServiceClient class. We end up passing an array (the points) inside another array (time_series), so perhaps that's not right, or maybe the metadata we set in append isn't right?
@staticmethod
def flush():
    client = monitoring_v3.MetricServiceClient()
    for x in PerfMetric.samples:
        print('{} has {} points'.format(x, len(PerfMetric.samples[x])))
        series = monitoring_v3.TimeSeries()
        series.metric.type = 'custom.googleapis.com/perf/{}'.format(x)
        series.resource.type = "global"
        series.points = PerfMetric.samples[x]
        client.create_time_series(request={
            "name": PerfMetric.project_name,
            "time_series": [series],
        })
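For context, the decorator itself isn't doing anything special. Simplified, and with purely illustrative names (perf_metric and the PerfMetric(name) constructor are just stand-ins here), it's roughly this:

import time
from functools import wraps

def perf_metric(name):
    # Illustrative sketch only: time the wrapped call and record the elapsed
    # seconds as a double observation through PerfMetric.append()
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                PerfMetric(name).append(time.time() - start)
        return wrapper
    return decorator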
Thanks in advance for any suggestions!
I believe this is a documented limitation of the Cloud Monitoring API: when writing data, the points[] field of each TimeSeries is restricted as follows:
When creating a time series, this field must contain exactly one point and the point's type must be the same as the value type of the associated metric.
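So the fix is to give each TimeSeries exactly one point. As far as I can tell, a single create_time_series request also can't carry two points for the same metric/resource combination, so batching all of one metric's points into one request isn't an option either; they have to go out in separate requests. A minimal sketch of a flush that works within those limits, assuming the rest of PerfMetric stays as in your question and that project_name holds the "projects/<project-id>" path (the samples/project_name stubs below are just placeholders):

from google.cloud import monitoring_v3

class PerfMetric:
    samples = {}        # metric name -> list of monitoring_v3.Point, filled by append()
    project_name = "projects/my-project-id"  # placeholder; use your real project path

    @staticmethod
    def flush():
        client = monitoring_v3.MetricServiceClient()
        for metric_name, points in PerfMetric.samples.items():
            print('{} has {} points'.format(metric_name, len(points)))
            for point in points:
                series = monitoring_v3.TimeSeries()
                series.metric.type = 'custom.googleapis.com/perf/{}'.format(metric_name)
                series.resource.type = "global"
                # Exactly one point per TimeSeries, per the documented restriction
                series.points = [point]
                # Separate request per point, since one request can only carry
                # a single point for any given time series
                client.create_time_series(request={
                    "name": PerfMetric.project_name,
                    "time_series": [series],
                })

This is obviously chattier than one batched call, and writes to the same series that are very close together in time may still be rejected, so for real volume you'd normally pre-aggregate (e.g. keep a summary per metric and write that once at the end) rather than push every raw observation.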