Suppose I have an input df with a timestamp column. When I set the window duration (with no sliding interval), I get the following windows:
- 10 minutes: with an input time of (2019-02-28 22:33:02), the window formed is (2019-02-28 22:30:02) to (2019-02-28 22:40:02)
- 8 minutes: with the same input time of (2019-02-28 22:33:02), the window formed is (2019-02-28 22:26:02) to (2019-02-28 22:34:02)
- 5 minutes: with the same input time of (2019-02-28 22:33:02), the window formed is (2019-02-28 22:30:02) to (2019-02-28 22:35:02)
- 14 minutes: with an input time of (2019-02-28 22:33:02), the window formed is (2019-02-28 22:32:02) to (2019-02-28 22:46:02)
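For reference, a minimal PySpark sketch of this kind of query (the column name ts, the sample row, and the session setup are placeholders, not the actual job):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, window

spark = SparkSession.builder.appName("window-demo").getOrCreate()

# A single event at the timestamp used in the examples above.
df = (spark.createDataFrame([("2019-02-28 22:33:02",)], ["ts"])
           .withColumn("ts", col("ts").cast("timestamp")))

# Tumbling window: only a duration is given, no sliding interval.
(df.groupBy(window(col("ts"), "10 minutes"))
   .agg(count("*").alias("events"))
   .select("window.start", "window.end", "events")
   .show(truncate=False))
```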
How does Spark calculate the start time of a window for a given input timestamp?
This is explained in the section "Understanding How Intervals Are Computed" of the book Stream Processing with Apache Spark, published by O'Reilly:

"The window intervals are aligned to the start of the second/minute/hour/day that corresponds to the next upper time magnitude of the time unit used."
In your case you are always using minutes, so the next upper time magnitude is the hour. Spark therefore anchors the windows at the start of the hour and lays them out from there in steps of the window duration; each of your cases falls into whichever interval of that grid covers its timestamp (forget about the 2 seconds, that is just a delay in the internals).
This alignment is independent of the incoming data and its timestamp/event time.
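As a rough illustration of that rule (plain Python, not Spark's actual implementation; the helper window_bounds is my own), anchoring at the start of the hour and stepping forward in duration-sized intervals reproduces the 10- and 5-minute cases above:

```python
from datetime import datetime, timedelta

def window_bounds(ts: datetime, minutes: int):
    """Hour-anchored tumbling window covering ts, per the rule quoted above."""
    anchor = ts.replace(minute=0, second=0, microsecond=0)  # start of the hour
    step = timedelta(minutes=minutes)
    start = anchor + ((ts - anchor) // step) * step  # last grid boundary <= ts
    return start, start + step

ts = datetime(2019, 2, 28, 22, 33, 2)
print(window_bounds(ts, 10))  # 2019-02-28 22:30:00 to 22:40:00
print(window_bounds(ts, 5))   # 2019-02-28 22:30:00 to 22:35:00
```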
As an additional note, the lower window boundary is inclusive whereas the upper one is exclusive; in mathematical notation this looks like [start_time, end_time).
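With the window_bounds sketch above, this also means that a timestamp landing exactly on a grid boundary opens the next window rather than closing the previous one:

```python
print(window_bounds(datetime(2019, 2, 28, 22, 40, 0), 10))
# (2019-02-28 22:40:00, 2019-02-28 22:50:00): 22:40:00 starts the new window,
# since the upper boundary of [22:30:00, 22:40:00) is exclusive.
```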