I'm getting data from a sensor in the form of UDP packets. The manual gives the packet format, but I'm having trouble understanding how to interpret the timestamp. It's described as an unsigned long, 4 bytes in length (e.g. 'a07245ba'), that is supposed to be interpreted as a 20-bit integer part and a 12-bit fractional part. I'm also confused by the note "modulo 20 bits" that is included.
How do I go about interpreting these timestamps correctly?
I've tried simply interpreting the number in two parts using Python's int(str, 16) function (e.g. int('a0724', 16) and int('5ba', 16)) and then joining the two parts with a decimal point (e.g. '657188.1466 seconds'). This seems to give the proper units for the timestamp (seconds): I've recorded ~10 seconds of data and the first and last timestamps are 10 seconds apart. However, the fractional part of the number seems incorrect. When I plot the data, the timestamp jumps forwards and backwards unexpectedly, which leads me to believe I'm interpreting it incorrectly.
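Here's a minimal sketch of what I'm doing now (the 5-digit/3-digit split and the string concatenation are just my guesses at what the manual means):

```python
# My current (possibly wrong) approach: split the 8 hex digits into a
# 5-digit (20-bit) "whole" part and a 3-digit (12-bit) "fraction" part,
# then glue the two decimal strings together around a decimal point.
ts_hex = 'a07245ba'
whole = int(ts_hex[:5], 16)   # 0xa0724 -> 657188
frac = int(ts_hex[5:], 16)    # 0x5ba   -> 1466
timestamp = float(f'{whole}.{frac}')
print(timestamp)              # 657188.1466
```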
Additionally, the timestamp I have interpreted doesn't match either of the expected references. The manual says it should be either seconds since power-on or seconds since 1/1/2010, but when I checked, neither seems to be the case.
For example, the timestamp jumps unexpectedly up to 726162.71 seconds and then back down to 726162.125 seconds. The first four bytes (first eight hex digits) of each packet are the timestamp:
datasample = np.array(['b1491fda 00001017 00040a88 00000000 0a 02 00c24d18 0076dd10 fd13fe3c 0032d8ce 0222c71a 01f0f0fa',
'b1492010 00001018 00040a88 00000000 0a 02 00c249aa 0076dbee fd148e86 0032dc34 02235336 01f0f3c8',
'b1492047 00001019 00040a88 00000000 0a 02 00c2463c 0076dacc fd151ed0 0032df9a 0223df52 01f0f696',
'b149207d 0000101a 00040a88 00000000 0a 02 00c248d0 0076da0a fd13fff4 003265b8 02239a24 01f0f3e0',
'b14920b4 0000101b 00040a88 00000000 0a 02 00c248d0 0076da0a fd13fff4 003265b8 02239a24 01f0f3e0',
'b14920eb 0000101c 00040a88 00000000 0a 02 00c1eed0 0076a812 fd148d00 0032b896 022396fe 01f0b4ac'],
dtype='|S98')
timesample = np.array([726161.4058, 726162.16, 726162.71, 726162.125, 726162.18, 726162.235 ])
Here's a sample of two data packets that are ~10 seconds apart:
datasample10 = np.array(['b1a2f9ea 000012ea 00040a88 00000000 0a 02 00c230d4 007671a6 fd1c2538 002b512e 021b9f7c 01f14944',
'b1a39a8e 000015db 00040a88 00000000 0a 02 00c1d26c 0076b032 fd1c3554 002d51b2 021bd5a0 01f0cd92'],
dtype='|S98')
timesample10 = np.array([727599.2538, 727609.2702])
The 12 bits can represent 2**12 different numbers, i.e. the integers from 0 to 2**12 - 1 (4095). If we were to take the decimal string representation of that integer and directly use it as the fractional part of a second, then we would only be able to represent fractional seconds from 0 to 0.4095. That doesn't seem right. To spread the fractional parts evenly between 0 and 1, we should instead divide by 4096:
import numpy as np
datasample = np.array(['b1491fda 00001017 00040a88 00000000 0a 02 00c24d18 0076dd10 fd13fe3c 0032d8ce 0222c71a 01f0f0fa',
'b1492010 00001018 00040a88 00000000 0a 02 00c249aa 0076dbee fd148e86 0032dc34 02235336 01f0f3c8',
'b1492047 00001019 00040a88 00000000 0a 02 00c2463c 0076dacc fd151ed0 0032df9a 0223df52 01f0f696',
'b149207d 0000101a 00040a88 00000000 0a 02 00c248d0 0076da0a fd13fff4 003265b8 02239a24 01f0f3e0',
'b14920b4 0000101b 00040a88 00000000 0a 02 00c248d0 0076da0a fd13fff4 003265b8 02239a24 01f0f3e0',
'b14920eb 0000101c 00040a88 00000000 0a 02 00c1eed0 0076a812 fd148d00 0032b896 022396fe 01f0b4ac'],
dtype='|S98')
print(np.array([int(row[:8],16) for row in datasample]) / 2**12)
yields
[726161.99072266 726162.00390625 726162.01733398 726162.03051758 726162.04394531 726162.05737305]
This has the nice property that the timestamps are all increasing:
result = (np.array([int(row[:8],16) for row in datasample]) / 2**12)
print(np.diff(result))
# [0.01318359 0.01342773 0.01318359 0.01342773 0.01342773]
And datasample10 maps to timestamps which are about 10 seconds apart:
datasample10 = np.array(['b1a2f9ea 000012ea 00040a88 00000000 0a 02 00c230d4 007671a6 fd1c2538 002b512e 021b9f7c 01f14944',
'b1a39a8e 000015db 00040a88 00000000 0a 02 00c1d26c 0076b032 fd1c3554 002d51b2 021bd5a0 01f0cd92'],
dtype='|S98')
result = (np.array([int(row[:8],16) for row in datasample10]) / 2**12)
print(np.diff(result))
# [10.04003906]
I'm not sure how these hex strings are supposed to be interpreted as seconds since the power was turned on or since 2010-01-01. If they are meant to represent seconds since power-on, and presumably the data you posted came soon after, then one way to make the numbers reasonable in size would be to take the integers modulo 2**20, i.e. modulo 20 bits:
result = (np.array([int(row[:8],16) for row in datasample]) % 2**20 / 2**12)
print(result)
# [145.99072266 146.00390625 146.01733398 146.03051758 146.04394531 146.05737305]
But if this is correct, then of the original 32 bits only the rightmost 20 are used: 12 bits for the fractional part and 8 bits for the whole seconds. That means the timestamp would wrap back to 0 after only 256 seconds (about four minutes). That seems rather limited, so I'm not confident this is the right way to interpret "modulo 20 bits".
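To make that concrete, here's the range each reading of the spec would allow (plain arithmetic, no assumptions about the device beyond the bit counts):

```python
# Bit budget under each reading of the spec
total_bits = 32
frac_bits = 12

# Reading 1: use all 32 bits -> 20 whole-second bits
print(2**(total_bits - frac_bits))   # 1048576 s, roughly 12 days before wrapping

# Reading 2: "modulo 20 bits" keeps 20 bits total -> only 8 whole-second bits
print(2**(20 - frac_bits))           # 256 s, about 4 minutes before wrapping
```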
On the other hand, if the timestamps are meant to represent seconds since 2010-01-01, then we should expect values around 301 million:
import datetime as DT
print((DT.datetime.now() - DT.datetime(2010,1,1)).total_seconds())
# 301151491.085063
I haven't managed to guess a mapping from datasample to timestamps in the 301-million range which preserves both monotonicity and the known 10-second interval.
Well, we could just add 301 million to our current formula, but that would be totally contrived...
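For completeness, here's what the 2010-epoch reading gives for the first sample if we take the full 32-bit value at face value; it lands in early January 2010, which is presumably not when the data was recorded, so this interpretation can't work without some extra offset:

```python
import datetime as DT

# Hypothetical: treat the full 32-bit value as fixed-point seconds
# since the 2010-01-01 epoch (20-bit whole part, 12-bit fraction).
epoch = DT.datetime(2010, 1, 1)
raw = int('b1491fda', 16)            # first timestamp in datasample
seconds = raw / 2**12                # ~726161.99 s, i.e. ~8.4 days
print(epoch + DT.timedelta(seconds=seconds))
# lands on 2010-01-09, nowhere near a plausible recording date
```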