Currently I am using the code below to read ~7-10 feather files into my R session all at once.
library(magrittr)
library(feather)
list.files("C:/path/to/files",pattern="\\.feather$") %>% lapply(read_feather)
How can I pipe these into independent data objects based on their unique file names?
ex.
accounts_jan.feather
users_jan.feather
-> read feather function -> hold in working memory as:
accounts_jan_df
users_jan_df
Thanks.
This seems like a case of trying to do too much with a pipe (https://github.com/hrbrmstr/rstudioconf2017/blob/master/presentation/writing-readable-code-with-pipes.key.pdf). I would recommend segmenting your process a little:
# Get vector of files (full paths so read_feather can find them)
files <- list.files("C:/path/to/files", pattern = "\\.feather$", full.names = TRUE)

# Form object names from the file names
object_names <-
  files %>%
  basename() %>%
  tools::file_path_sans_ext()

# Read the files and add them to the global environment
lapply(files, read_feather) %>%
  setNames(object_names) %>%
  list2env(envir = globalenv())
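Note that this creates objects named after the bare file names (accounts_jan, users_jan). If you want the _df suffix from your example, a minimal tweak (assuming you simply want the suffix appended verbatim) is to adjust object_names before the setNames step:

# Append the _df suffix shown in the question
object_names <- paste0(object_names, "_df")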
If you really must do this with a single pipe, you should use mapply instead, as it has a USE.NAMES argument.
list.files("C:/path/to/files", pattern = "\\.feather$", full.names = TRUE) %>%
  mapply(read_feather,
         .,
         USE.NAMES = TRUE,
         SIMPLIFY = FALSE) %>%
  setNames(names(.) %>% basename() %>% tools::file_path_sans_ext()) %>%
  list2env(envir = globalenv())
Personally, I find the first option easier to reason about when debugging (I'm not a fan of pipes within pipes).
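If you want an easy checkpoint for debugging, you can also hold the named list in an intermediate variable (feather_list here is a hypothetical name) and inspect it before pushing anything into your workspace:

# Read into a named list first so it can be inspected before assignment
feather_list <- lapply(files, read_feather) %>% setNames(object_names)
# Quick sanity checks on what was read
names(feather_list)
sapply(feather_list, nrow)
# Only then push the data frames into the global environment
list2env(feather_list, envir = globalenv())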