I was deriving NDVI (Normalized Difference Vegetation Index), which is the ratio (NIR - R)/(NIR + R), where NIR is the near-infrared band and R is the red band. The index ranges from -1 to 1. I wrote PyOpenCL code for this; here is what I have done and observed.
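For reference, here is a minimal CPU-side sketch of the same index in plain NumPy (the small epsilon in the denominator is my addition, to avoid 0/0 where both bands are zero):

import numpy as np

def ndvi_cpu(r, nir):
    # Promote the uint8 bands to float so the ratio keeps its fraction.
    r = r.astype(np.float32)
    nir = nir.astype(np.float32)
    return (nir - r) / (nir + r + 1e-6)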
Python code:
import pyopencl as cl
import cv2
from PIL import Image
import numpy as np
from time import time
import matplotlib.pyplot as plt
#get kernel file
def getKernel():
    kernel = open('kernel.c').read()
    return kernel

#return images as numpy int32 arrays
def convToArray(im_r,im_nir):
    a = np.asarray(im_r).astype(np.int32)
    b = np.asarray(im_nir).astype(np.int32)
    return a,b
#processing part
def getDerivation(platform,device,im_r,im_nir):
    #set the device
    pltfrm = cl.get_platforms()[platform]
    dev = pltfrm.get_devices()[device]
    cntx = cl.Context([dev])
    queue = cl.CommandQueue(cntx)
    #get 2D arrays
    r,nir = convToArray(im_r,im_nir)
    #width of the image
    x = r.shape[1]
    mf = cl.mem_flags
    bs = time()
    #input image buffers
    inR = cl.Buffer(cntx,mf.READ_ONLY | mf.COPY_HOST_PTR,hostbuf=r)
    inIR = cl.Buffer(cntx,mf.READ_ONLY | mf.COPY_HOST_PTR,hostbuf=nir)
    #output image buffer
    ndvi = cl.Buffer(cntx,mf.WRITE_ONLY,r.nbytes)
    be = time()
    print("Buffering time: " + str(be-bs) + " sec")
    ts = time()
    #build the kernel
    task = cl.Program(cntx,getKernel()%(x)).build()
    #execute the kernel
    task.derive(queue,r.shape,None,inR,inIR,ndvi)
    #create an empty array to store the result
    Vout = np.empty_like(r)
    #copy the output buffer into the empty array
    cl.enqueue_copy(queue,Vout,ndvi)
    te = time()
    #convert the array to a gray-image-compatible format
    NDVI = Vout.astype(np.uint8)
    print("Processing time: " + str(te - ts) + " On: " + pltfrm.name + " --> " + dev.name)
    return NDVI
def process(platform,device,im_r,im_nir):
    NDVI = getDerivation(platform,device,im_r,im_nir)
    print(NDVI)
    cv2.imshow("NDVI",NDVI)
    cv2.waitKey(0)
if __name__ == '__main__':
    R = cv2.imread("BAND3.jpg",0)
    NIR = cv2.imread("BAND4.jpg",0)
    print(R.dtype) #returns uint8
    process(0,0,R,NIR) #(0,0) is my intel gpu
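For reference, the one-line launch task.derive(queue,r.shape,None,inR,inIR,ndvi) is PyOpenCL shorthand; r.shape, here (1151, 1151), is the 2-D global work size, so one work-item runs per pixel. A more explicit equivalent (a sketch using the same names as in the listing above) would be:

krn = task.derive
krn.set_args(inR, inIR, ndvi)
# Global size = image shape, local size left to the runtime (None).
cl.enqueue_nd_range_kernel(queue, krn, r.shape, None)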
Kernel code (OpenCL C):

__kernel void derive(__global int* inR,__global int* inIR,__global int* ndvi){
    int x = get_global_id(0);
    int y = get_global_id(1);
    int width = %d;
    int index = x + y*width;
    //ndvi ratio (-1 to 1)
    int a = ((inIR[index] - inR[index])/(inIR[index] + inR[index])) * (256);
    a = (a < (0) ? (-1*a) : (a));
    a = (a > (255) ? (255) : (a));
    ndvi[index] = (a);
}
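The %d above is not OpenCL syntax; it is filled in from Python. The host builds the program with getKernel()%(x), so the image width is pasted into the kernel source before compilation, e.g. for the 1151-pixel-wide test image:

src = getKernel() % (1151)   # "int width = %d;" becomes "int width = 1151;"
task = cl.Program(cntx, src).build()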
Input image R: (image)
Input image NIR: (image)
Both images have a bit depth of 8.
But I get just a blank image! For debugging, I initially printed the result to the command line; the output:
(1151, 1151)
Buffering time: 0.015959739685058594 sec
Processing time: 0.22115755081176758 On: Intel(R) OpenCL --> Intel(R) HD Graphics 520
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
Now, what I think is that I may not be using the proper datatype for the images. Also, in the kernel, the line ((inIR[index] - inR[index])/(inIR[index] + inR[index])) will give a float value, which I multiply by 256 to get a pixel value for that ratio. Is that where the problem is? Does anyone know where I am going wrong?
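To make that suspicion concrete, the arithmetic can be mimicked on the host with made-up pixel values (in C, int/int truncates toward zero, which Python's int() reproduces):

nir, r = 200, 50                          # hypothetical pixel values
print((nir - r) / (nir + r))              # 0.6, the ratio I expect
print(int((nir - r) / (nir + r)) * 256)   # 0 if the division happens in int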
Help is much appreciated!
Okay... I got it. I just changed the datatype in the line a = np.asarray(im_r).astype(np.int32) in the function convToArray() to float32, and in the kernel file I changed the parameter type to float and used
int a = (int)((((float)(inIR[index] - inR[index])/(float)(inIR[index] + inR[index]))+1)*127.5);
for the calculation. However, I need an explanation of why this worked and not the other way. My best guess is that the result of this calculation loses data when it is converted from float to int... is that it?
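For completeness, a quick check of the new mapping with hypothetical pixel values; float division keeps the fractional ratio in [-1, 1], and (ratio + 1) * 127.5 maps that interval linearly onto the 0-255 gray range:

def to_gray(nir, r):
    # Float division keeps the fraction; int() truncates only at the end.
    ratio = (nir - r) / (nir + r)
    return int((ratio + 1) * 127.5)

print(to_gray(200.0, 50.0))   # ratio  0.6 -> 204
print(to_gray(50.0, 200.0))   # ratio -0.6 ->  51 (negative values stay distinct)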