I'm building the pipeline: sensor data -> MQTT broker -> Kafka -> Cassandra. The payload is transferred as JSON, and when it is saved in Cassandra the date is stored as an int, so I can't get a readable date when querying with cqlsh.
CREATE TABLE sensordata.mqttsensordata (
    sensor text,
    temperature float,
    humidity int,
    timestamp int,
    battery int,
    calibratedhumidity int,
    datetime timestamp,
    receiver text,
    rssi float,
    voltage float,
    PRIMARY KEY (
        (sensor, temperature, humidity),
        timestamp
    )
);
How do I get a readable timestamp when querying the database, like in the picture below?
The only CQL time functions that can operate on a UNIX timestamp are [min|max]timeuuid (mintimeuuid() or maxtimeuuid()). You can use either of those on the timestamp column and nest it inside the toTimestamp() function.
For example, if I have a table of sample times:
> SELECT * FROM sample_times WHERE a=1;
a | b | c | d
---+---------------------------------+--------------------------------------+---------------
1 | 2021-08-08 21:42:54.131000+0000 | 96594031-f891-11eb-b7bc-e12958c8479f | 1628458974131
Column d is a bigint where I have stored the UNIX timestamp (in milliseconds). I can show that as a timestamp like this:
> SELECT totimestamp(mintimeuuid(d)) FROM sample_times WHERE a=1;
system.totimestamp(system.mintimeuuid(d))
-------------------------------------------
2021-08-08 21:42:54.131000+0000
(1 rows)
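Applied to the schema in the question, the same pattern would look roughly like the sketch below. Note the assumptions: mintimeuuid() interprets its numeric argument as a UNIX epoch in milliseconds, and your timestamp column is an int, which cannot hold a millisecond epoch, so it most likely holds seconds and would need converting first (readable_time is just an illustrative alias):

```
-- Sketch, not tested against your cluster: assumes the timestamp
-- column holds a UNIX epoch in milliseconds, which is what
-- mintimeuuid() expects. If it actually holds seconds (likely,
-- given the int type), convert to milliseconds first -- e.g. on
-- write, or with timestamp * 1000 on Cassandra 4.0+ where CQL
-- supports arithmetic in SELECT.
SELECT sensor,
       toTimestamp(mintimeuuid(timestamp)) AS readable_time
FROM sensordata.mqttsensordata;
```

A cleaner long-term fix is to write the value into the existing datetime timestamp column at ingest time (e.g. in the Kafka connector), so cqlsh renders it readably without any per-query conversion.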