I am trying to get the maximum and minimum pixel values of a tif raster file.
I am using the GetMaximum() and GetMinimum() methods, but they return None, so I get this error:
TypeError: unsupported operand type(s) for -: 'NoneType' and 'NoneType'
The input file is a small tif raster for testing purposes. I get the same error with other rasters that worked fine just a couple of days ago.
This is what I have been trying:
from osgeo import gdal
driver = gdal.GetDriverByName('GTiff')
in_file = gdal.Open("L8_field.tif")
band1 = in_file.GetRasterBand(1)
barray = band1.ReadAsArray()
# Getting the interval value and setting the classes
max_value = band1.GetMaximum()
min_value = band1.GetMinimum()
tot_classes = 5
class_x = (max_value - min_value) / tot_classes
class_1 = class_x + min_value
class_2 = (class_x * 2) + min_value
class_3 = (class_x * 3) + min_value
class_4 = (class_x * 4) + min_value
class_5 = max_value
...
I'm using this to classify the raster, so there is more code that actually does the classification; this snippet is only meant to compute the equal-interval breaks.
What am I missing?
I don't know exactly why RasterBand.GetMaximum() and RasterBand.GetMinimum() return None. Perhaps a raster attribute table needs to be computed before GTiff can access those values on the fly?
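If you want to stay inside GDAL, you can force the computation yourself. This is a minimal sketch, assuming your bindings expose Band.ComputeRasterMinMax() (passing 0 asks for an exact rather than approximate result); note it will still include nodata pixels unless a nodata value is set on the band, which is the same caveat as below.
from osgeo import gdal
in_file = gdal.Open("L8_field.tif")
band1 = in_file.GetRasterBand(1)
# Scans the band and returns a (min, max) tuple even when no statistics are stored in the file
min_value, max_value = band1.ComputeRasterMinMax(0)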
You can do similar operations on the numpy.ndarray variable barray.
import numpy as np
# ...
max_value = np.amax(barray)
min_value = np.amin(barray)
The obvious problem here is when the nodata value is the min or max, in which case that is what gets returned, to your displeasure. Landsat uses nodata=0.
You can solve that by making a masked array and using the masked array's min and max methods.
import numpy as np
# ...
# mask out pixels equal to the nodata value (0 for this Landsat scene)
masked_arr = np.ma.MaskedArray(barray, mask=(barray == 0))
max_value = masked_arr.max()
min_value = masked_arr.min()
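If you'd rather not hardcode the 0, you can read the nodata value from the band itself. A short sketch, assuming the file actually declares one (GetNoDataValue() returns None when it does not):
nodata = band1.GetNoDataValue()  # None if no nodata value is set on the band
if nodata is not None:
    masked_arr = np.ma.masked_equal(barray, nodata)
else:
    masked_arr = np.ma.MaskedArray(barray)  # nothing to mask
max_value = masked_arr.max()
min_value = masked_arr.min()
With those values, the equal-interval breaks in your snippet compute without the TypeError.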