I'm working with huge data files (several hundred MB each) and need to be as efficient as possible. I'm using lapply to load all the files into a list, but because of how the files are generated there are a couple of columns I don't need.
dfs <- list.files(pattern = "\\.txt$")  # pattern is a regex, not a glob
dfss <- lapply(dfs, read.table)
I normally drop them at read time with the drop argument of data.table::fread:

file <- data.table::fread("one_file.txt", drop = c("ID", "num"))

But I don't see how to pass that argument inside the lapply call. Any suggestions?
What about passing the extra argument through lapply? Any arguments you give lapply after the function are forwarded to every call. Note that drop belongs to data.table::fread, not base read.table, so apply fread (which is also much faster on files this size):

library(data.table)
dfss <- lapply(dfs, fread, drop = c("ID", "num"))
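If you want to stay with base R, read.table can also skip columns at read time via colClasses: set the unwanted columns to the string "NULL". A minimal, self-contained sketch (the temp file, its contents, and the column names are made up for illustration):

```r
# Write a small tab-separated example file (placeholder data).
tmp <- tempfile(fileext = ".txt")
writeLines(c("ID\tnum\tvalue",
             "1\t10\ta",
             "2\t20\tb"), tmp)

# The string "NULL" (not the object NULL) tells read.table to
# discard that column entirely while reading.
df <- read.table(tmp, header = TRUE, sep = "\t",
                 colClasses = c(ID = "NULL", num = "NULL", value = "character"))

names(df)  # "value"
```

The same idea combines with lapply just like drop does: lapply(dfs, read.table, header = TRUE, colClasses = ...), since lapply forwards the extra arguments to each call.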