I am trying to manually identify/correct trees using LiDAR data (a 1.7 GB object) and a tree tops object via the locate_trees function. Part of the problem is that the rgl rendering is extremely slow, even though a 4 GB Nvidia 3050 should be able to handle it. Does rgl automatically use the GPU, or does it default to the integrated graphics on the motherboard? Is there a way to speed up the rendering? My other system specs are a Core i9 (14 threads) and 64 GB RAM, and I am using R 4.2.1.
Code:
library(lidR)
# Import LiDAR data
LiDAR_File = readLAS("path/file_name.las")
# Find tree tops
TTops = find_trees(LiDAR_File, lmf(ws = 15, hmin = 5))
# Manually correct tree identification
TTops_Manual = locate_trees(LiDAR_File, manual(TTops)) # This is where rgl rendering becomes too slow if there are too many points involved.
There were two problems here. First, the lidR::manual() function, which is used to select trees, has a loop where one sphere is drawn for each tree. By default rgl redraws the whole scene after each change; this should be suppressed. The patch in https://github.com/r-lidar/lidR/pull/611 fixes this. You can install a version with this fix via
remotes::install_github("r-lidar/lidR")
Second, rgl was somewhat inefficient in drawing the initial point cloud, duplicating the data unnecessarily. When you have tens of millions of points, this can exhaust all of R's memory and things slow to a crawl. The development version of rgl fixes this. It's available via
remotes::install_github("dmurdoch/rgl")
The LiDAR point clouds are very big, so you might find you still have problems even with these changes. Getting more regular RAM will help R; you may need that if the time to the first display is too long. After the first display, almost all the work is done in the graphics system; if things are still too slow, you may need a faster graphics card (or more memory for it).
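If rendering is still too slow after both updates, one workaround (my suggestion here, not part of the fixes above) is to thin the point cloud before the manual correction step with lidR::decimate_points(), so rgl has far fewer points to draw:

library(lidR)
# Hypothetical workaround: decimate to roughly 5 points/m^2 before the
# manual step; the automatic tree tops were already found on the full cloud.
LiDAR_Thin <- decimate_points(LiDAR_File, random(density = 5))
TTops_Manual <- locate_trees(LiDAR_Thin, manual(TTops))

The decimated cloud is only used for the interactive display; the density you keep is a trade-off between rendering speed and how precisely you can place the corrected tree tops.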