I have what is probably a basic question about the way R and QGIS import raster files.
I have a single-band raster. When I import it into R using the raster() function of the raster package, I get this range of pixel values:
class : RasterLayer
dimensions : 10980, 10980, 120560400 (nrow, ncol, ncell)
resolution : 10, 10 (x, y)
extent : 6e+05, 709800, 5590200, 5700000 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=31 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
data source : /data/MTDA/CGS_S2_RADIOMETRY/2017/10/15/S2B_20171015T104525Z_31UFS_TOC_V100/S2B_20171015T104525Z_31UFS_TOC-B02_10M_V100.tif
names : S2B_20171015T104525Z_31UFS_TOC.B02_10M_V100
values : -32768, 32767 (min, max)
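For reference, a minimal sketch of the import step (the file path is copied from the print-out above). My understanding, hedged, is that the -32768/32767 range shown here is simply the range of the file's signed 16-bit data type, because raster() does not scan every pixel of a large on-disk file when it opens it; the actual statistics have to be computed explicitly:

library(raster)

# path taken from the print-out above
f <- "/data/MTDA/CGS_S2_RADIOMETRY/2017/10/15/S2B_20171015T104525Z_31UFS_TOC_V100/S2B_20171015T104525Z_31UFS_TOC-B02_10M_V100.tif"
r <- raster(f)

# force the actual min/max to be computed from the pixel values
r <- setMinMax(r)
minValue(r)
maxValue(r)

# or compute the statistics directly (NA cells are ignored)
cellStats(r, stat = "min")
cellStats(r, stat = "max")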
When I stack this layer into a raster brick, I get these min/max values:
class : RasterLayer
band : 2 (of 11 bands)
dimensions : 10980, 10980, 120560400 (nrow, ncol, ncell)
resolution : 10, 10 (x, y)
extent : 6e+05, 709800, 5590200, 5700000 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=31 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0
data source : /tmp/Rtmp882dZS/raster/r_tmp_2017-11-10_172819_11532_86514.grd
names : S2B_20171015T104525Z_31UFS_TOC.B02_10M_V100
values : -1129, 9994 (min, max)
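That change is consistent with the data source line above pointing at a temporary .grd file: stacking apparently forces the values to be read and rewritten, and the statistics get computed along the way. A sketch of what that workflow might look like (the folder name and file pattern are assumptions on my part):

library(raster)

# assumed folder holding the eleven band files
files <- list.files("S2B_20171015T104525Z_31UFS_TOC_V100",
                    pattern = "\\.tif$", full.names = TRUE)

s <- stack(files)   # lazy: each layer still points at its file on disk
b <- brick(s)       # if the data do not fit in memory, raster writes a
                    # temporary .grd file and computes each layer's actual
                    # min/max while doing so

b[[2]]              # the "values" line now shows the real extremes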
However, if I load the same raster in QGIS, the min value is 228 and the max value is 907 (I calculated these values with the options "Extent: Full" and "Accuracy: Actual (slower)").
So, where do these differences come from? I do not understand exactly what R and QGIS are doing...
In the end, I found out what the difference is! R gives me the real min/max values, whereas QGIS, by default, estimates the min/max with a cumulative count cut. When I set "Load min/max values" (in the Raster Properties window) to "Min/Max", I got the same values that R showed.
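In case it helps anyone: QGIS's cumulative count cut clips to the 2%/98% quantiles of the values by default, so something close to the 228/907 figures can be reproduced in R with a quantile call. A sketch, assuming the layer r from the first snippet above (the exact numbers may differ slightly depending on how QGIS samples the raster):

# approximate QGIS's default cumulative count cut (2% - 98%)
quantile(r, probs = c(0.02, 0.98), na.rm = TRUE)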