
How to handle an irregularly spaced time series and return a regularly spaced one


I have a tick-by-tick dataset of stock prices over a period, and I want to convert the high-frequency, irregularly spaced data into a lower-frequency, regularly spaced time series for later analysis. I'm using R.

The data tracks the value of a particular stock for every transaction/quote, with timestamps at 1-second resolution. For example, at 2009-07-16 13:30:01 (referring to the data below), there are two quotes during that second, valued at 145.88 and 145.89.

                   Date   Value
2009-07-16T13:30:01.000  145.88
2009-07-16T13:30:01.000  145.89
2009-07-16T13:30:02.000  145.57
2009-07-16T13:30:02.000  145.75
2009-07-16T13:30:03.000  145.65
2009-07-16T13:30:03.000  145.84
2009-07-16T13:30:03.000 145.776
2009-07-16T13:30:04.000  145.74
2009-07-16T13:30:04.000  145.68
2009-07-16T13:30:04.000  145.68
2009-07-16T13:30:04.000  145.76
2009-07-16T13:30:04.000  145.68
.
.
.

First I would like to convert the data into a regularly spaced time series that shows only the latest value of the stock for each second:

                   Date   Value
2009-07-16T13:30:01.000  145.89
2009-07-16T13:30:02.000  145.75
2009-07-16T13:30:03.000 145.776
2009-07-16T13:30:04.000  145.68
2009-07-16T13:30:05.000  145.76
2009-07-16T13:30:06.000  145.85
2009-07-16T13:30:07.000   145.8
2009-07-16T13:30:08.000  145.62
2009-07-16T13:30:09.000  145.85
2009-07-16T13:30:10.000  145.64
.
.
.

But most importantly, I want to convert the data into a regularly spaced AND lower-frequency time series, say 1 minute, showing the latest value of the stock for each minute:

                   Date  Value
2009-07-16T13:31:00.000 145.89
2009-07-16T13:32:00.000 145.53
2009-07-16T13:33:00.000 145.68
2009-07-16T13:34:00.000 145.14
2009-07-16T13:35:00.000  145.7
2009-07-16T13:36:00.000 145.83
2009-07-16T13:37:00.000 145.88
2009-07-16T13:38:00.000 145.62
2009-07-16T13:39:00.000 145.84
2009-07-16T13:40:00.000 145.63
.
.
.

I have tried aggregatets() from the highfrequency package, but it doesn't return the results I want: the datetimes are neither regularly spaced nor at the lower frequency, even though I specified 1 minute, as shown in my code below.

library(lubridate)
library(dplyr)
data$Date <- ymd_hms(data$Date)                               # parse the timestamp strings

library(xts)
data_xts <- as.xts(data[,"Value"], order.by=data[,"Date"])    # build an xts object indexed by time

library(highfrequency)
data_new <- aggregatets(data_xts, on="minutes", k=1)          # aggregate to 1-minute frequency

How do I do this in R?


Solution

  • Do the aggregating first, before building the xts object.

    What you've got is this.

    > head(df1, 10)
                          date    value
    1  2019-02-02T13:59:38.000 145.8922
    2  2019-02-02T13:59:38.000 145.8820
    3  2019-02-02T13:59:38.000 145.7998
    4  2019-02-02T13:59:39.000 145.8122
    5  2019-02-02T13:59:39.000 145.7355
    6  2019-02-02T13:59:39.000 145.7822
    7  2019-02-02T13:59:40.000 145.7078
    8  2019-02-02T13:59:41.000 145.7133
    9  2019-02-02T13:59:41.000 145.6906
    10 2019-02-02T13:59:41.000 145.8749
    

    Now we use aggregate() to get the latest value of each second (i.e. the value in the last row for each second).

    df1.sec <- aggregate(value ~ date, df1, FUN=function(x) x[length(x)])
    > head(df1.sec, 10)
                          date    value
    1  2019-02-02T13:59:38.000 145.7998
    2  2019-02-02T13:59:39.000 145.7822
    3  2019-02-02T13:59:40.000 145.7078
    4  2019-02-02T13:59:41.000 145.8749
    5  2019-02-02T13:59:42.000 145.7630
    6  2019-02-02T13:59:43.000 145.7921
    7  2019-02-02T13:59:44.000 145.6459
    8  2019-02-02T13:59:45.000 145.7680
    9  2019-02-02T13:59:46.000 145.7966
    10 2019-02-02T13:59:47.000 145.8542
    

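    As a side note, the same "latest value per second" step could also be done with dplyr, which the question already loads. This is only a minimal sketch, assuming df1 as above and a dplyr version that has slice_tail() (1.0.0 or newer); df1.sec2 is just an illustrative name.

    library(dplyr)
    # keep only the last row, i.e. the latest quote, within each second
    df1.sec2 <- df1 %>%
      group_by(date) %>%
      slice_tail(n = 1) %>%
      ungroup()
    
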
    Then we apply the same aggregate() call to the minutes, cutting away the seconds with substr().

    df1.min <- aggregate(value ~ substr(date, 1, 16), df1.sec, FUN=function(x) x[length(x)])
    > head(df1.min, 10)
       substr(date, 1, 16)    value
    1     2019-02-02T13:59 145.8073
    2     2019-02-02T14:00 145.6909
    3     2019-02-02T14:01 145.8617
    4     2019-02-02T14:02 145.7452
    5     2019-02-02T14:03 145.7080
    6     2019-02-02T14:04 145.8530
    7     2019-02-02T14:05 145.9772
    8     2019-02-02T14:06 145.8247
    9     2019-02-02T14:07 145.9125
    10    2019-02-02T14:08 145.6915
    

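    (Strictly speaking, the intermediate per-second step isn't needed for the minute bars: applying the same call directly to df1 also picks the last tick of each minute, assuming the rows are in chronological order, as tick data normally is.)

    # equivalent sketch: aggregate the raw ticks straight to minutes
    df1.min2 <- aggregate(value ~ substr(date, 1, 16), df1, FUN=function(x) x[length(x)])
    
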
    (Note: if the awkward column name "substr(date, 1, 16)" matters, we could also do:)

    # with(df1.sec, aggregate(list(value=value), by=list(date=substr(date, 1, 16)),
    #                         FUN=function(x) x[length(x)]))
    # #                date    value
    # # 1  2019-02-03T09:43 146.0894
    # # 2  2019-02-03T09:44 145.7456
    # # ...
    

    xts() wants a time-based index such as POSIXct, so we convert the date strings.

    df1.min$date.POSIX <- as.POSIXct(df1.min$`substr(date, 1, 16)`, format="%FT%H:%M")
    

    Now we can build the xts object from the clean data.

    library(xts)
    data_xts <- xts(df1.min$value, order.by=df1.min$date.POSIX)
    

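    If you want to check that the result really is regularly spaced at one minute, xts can report the spacing directly; a small sketch, assuming the data_xts just built (minutes that contained no ticks at all would of course still be missing).

    periodicity(data_xts)   # should report 1-minute periodicity
    diff(index(data_xts))   # differences between consecutive timestamps
    
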
    Result

    > data_xts
                            [,1]
    2019-02-02 13:59:00 145.8073
    2019-02-02 14:00:00 145.6909
    2019-02-02 14:01:00 145.8617
    2019-02-02 14:02:00 145.7452
    2019-02-02 14:03:00 145.7080
    2019-02-02 14:04:00 145.8530
    2019-02-02 14:05:00 145.9772
    2019-02-02 14:06:00 145.8247
    2019-02-02 14:07:00 145.9125
    2019-02-02 14:08:00 145.6915
    

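    For completeness: much the same can be done while staying entirely in xts, using endpoints() to pick the last tick of every minute and align.time() to snap the timestamps onto the minute grid. This is only a sketch, assuming the raw ticks in df1 below; ticks_xts and per_min are illustrative names. Note that it stamps each bar with the end of its minute rather than the start.

    library(xts)
    # build an xts object straight from the raw ticks
    ticks_xts <- xts(df1$value,
                     order.by=as.POSIXct(df1$date, format="%FT%H:%M:%OS"))
    # indices of the last observation in every 1-minute period ...
    ep <- endpoints(ticks_xts, on="minutes", k=1)
    # ... keep only those rows and round their timestamps up to the full minute
    per_min <- align.time(ticks_xts[ep], n=60)
    
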
    Toy Data

    set.seed(42)
    # second offsets 1..1000, each repeated 1-3 times to mimic several ticks per second
    date <- as.POSIXct(unlist(sapply(as.matrix(1:1000), function(x) 
      rep(x, sample(1:3, 1))))[1:1000], origin=Sys.time())
    df1 <- data.frame(date=date,
                      value=rnorm(1000, 145.8, 0.08962))
    # store the timestamps as ISO-like strings, as in the question
    df1$date <- strftime(df1$date, format="%FT%H:%M:%S.000")