I have a data frame with 12511 rows and 16 columns, obtained from an animal running experiment. Each row represents the running counts during one minute for each animal. I would like to sum the columns over every 60 rows (that is, counts per hour).
I tried to use the apply function to sum each block of 60 rows:
apply(rw[1:60,],2,sum)
apply(rw[61:120,],2,sum)
apply(rw[121:180,],2,sum)
... keeping this up until row 12511 is unthinkable and time-consuming.
I am sure there is a smart way to condense my data to 209 rows (208 full hours plus a final partial hour). Please help!!
Thank you.
Here's an approach using the data.table package and the vectorized colSums function.
Some data first:
set.seed(123)
# mock data: 12511 per-minute observations for each of three animals
rw <- data.frame(a = sample(12511), b = sample(12511), c = sample(12511))
Then, we will create an index using gl and run colSums per group:
library(data.table)
# gl(ceiling(12511/60), 60, 12511) labels rows 1-60 as group 1, rows 61-120 as group 2, and so on
setDT(rw)[, as.list(colSums(.SD)), by = gl(ceiling(12511/60), 60, 12511)]
# gl a b c
# 1: 1 378678 387703 388143
# 2: 2 384532 331275 341092
# 3: 3 355397 367039 369012
# 4: 4 378483 355384 367988
# 5: 5 365193 372779 388020
# ---
# 205: 205 385361 409004 389946
# 206: 206 407232 406940 345496
# 207: 207 363253 357317 356878
# 208: 208 387336 383786 348978
# 209: 209 186874 188616 183500
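If the gl call looks opaque: gl(n, k, length) builds a factor with n levels, each repeated k times in order and truncated to length, which is exactly the per-60-rows grouping needed here. A minimal sketch with 10 rows in blocks of 4 (the numbers are illustrative only):

gl(ceiling(10/4), 4, 10)
# [1] 1 1 1 1 2 2 2 2 3 3
# Levels: 1 2 3

The final level is shorter, just like the last, partial hour in the real data (which is why there are 209 groups above, not 208).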
Another similar approach would be:
setDT(rw)[, lapply(.SD, sum), by = gl(ceiling(12511/60), 60, 12511)]
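(Side note: to my knowledge data.table internally optimizes lapply(.SD, sum) by group via GForce, so this version should be at least as fast as the colSums one.)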
Or, using dplyr's summarise_each function, you could similarly do:
library(dplyr)
rw %>%
  group_by(indx = gl(ceiling(12511/60), 60, 12511)) %>%
  summarise_each(funs(sum))
# Source: local data table [209 x 4]
#
# indx a b c
# 1 1 378678 387703 388143
# 2 2 384532 331275 341092
# 3 3 355397 367039 369012
# 4 4 378483 355384 367988
# 5 5 365193 372779 388020
# 6 6 387260 386737 347777
# 7 7 343980 412633 383429
# 8 8 355059 352393 336798
# 9 9 372722 386863 425622
# 10 10 406628 370606 362041
# .. ... ... ... ...
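Note that in current dplyr releases summarise_each and funs are deprecated in favour of across. A sketch of the equivalent modern idiom, using the same indx grouping as above:

library(dplyr)
rw %>%
  group_by(indx = gl(ceiling(12511/60), 60, 12511)) %>%
  summarise(across(everything(), sum))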