I'm trying to run bdcsv.py:
$ sudo python /opt/bluedata/bundles/bluedata-epic-entdoc-minimal-release-3.7-2207/scripts/monitoring/bdcsv.py \
-c localhost \
-f cred.json \
-s 2018/02/07-00:00:00 \
-e 2018/02/07-23:59:59
I originally received this error with my own start and end values, so for this post I've used the start and end values from the example in the BlueData docs.
Running the above returns the following error (I've formatted the JSON to make it more readable):
processing data for virtual node: bluedata-40 ...
error: {
  "error": {
    "root_cause": [
      {
        "type": "parsing_exception",
        "reason": "[date_histogram] failed to parse field [time_zone]",
        "line": 1,
        "col": 477
      }
    ],
    "type": "parsing_exception",
    "reason": "[date_histogram] failed to parse field [time_zone]",
    "line": 1,
    "col": 477,
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "The datetime zone id '00:00' is not recognised"
    }
  },
  "status": 400
}
Any idea what is going wrong here?
The same error occurs when running bdusage.py.
The time zone seems to be passed to the Elasticsearch query in the wrong format. In both scripts, you will find the following lines (I've added comments inline for clarification):
tz = time.timezone / 3600 * -1
if tz < 0:
    tzstr = str(tz).zfill(3) + ":00"  # negative tz produces strings like "-06:00"
else:
    tzstr = str(tz).zfill(2) + ":00"  # positive tz returns e.g. "01:00"
tzstr is later included in the query. The error you describe appears only if the time difference is >= 0 hours, because Elasticsearch requires the time zone to be in a format like +01:00 or -01:00.
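To make the failure concrete, here is a minimal standalone sketch (the helper names buggy_tzstr and fixed_tzstr are mine, not from the BlueData scripts) comparing what the original formatting produces with a signed, zero-padded offset that Elasticsearch accepts:

# Minimal sketch; the helpers are hypothetical and only illustrate the formatting issue.
def buggy_tzstr(tz):
    # Mirrors the scripts' logic: negative offsets get a sign from str(), non-negative ones do not.
    if tz < 0:
        return str(tz).zfill(3) + ":00"  # -6 -> "-06:00" (accepted)
    return str(tz).zfill(2) + ":00"      #  0 -> "00:00"  (rejected: not a valid zone id)

def fixed_tzstr(tz):
    # Always signed and zero-padded: -6 -> "-06:00", 0 -> "+00:00", 1 -> "+01:00"
    return "{:+03d}:00".format(tz)

for tz in (-6, 0, 1):
    print("%+d: %s vs %s" % (tz, buggy_tzstr(tz), fixed_tzstr(tz)))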
Fix it by replacing the last line of the snippet from the scripts with:
tzstr = "+" + str(tz).zfill(2) + ":00"  # positive tz will now return e.g. "+01:00"
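As a quick sanity check, the corrected branch now produces offsets that Elasticsearch accepts; equivalently, a single signed format string could cover both branches (just a suggestion, not what the scripts currently do):

for tz in (0, 1, 5):
    assert "+" + str(tz).zfill(2) + ":00" == "{:+03d}:00".format(tz)
    print("{:+03d}:00".format(tz))  # prints "+00:00", "+01:00", "+05:00"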