python, numpy, opencv, disparity-mapping, depth-testing

Faster iteration than a nested for loop over 2D arrays


I have an optimization problem when computing errors for disparity map estimation.

To compute the errors, I created a class with a method for each error metric. I need to iterate over every pixel to compute the error, and the arrays are big because the images are 1937 x 1217 pixels. Do you know how to optimize this?

Here is the code of my method:

def mreError(self):
    # ground-truth disparity, validity mask and estimated disparity
    s_gt = self.ref_disp_norm
    s_all = self.disp_bin
    s_r = self.disp_norm

    s_gt = s_gt.astype(np.float32)
    s_r = s_r.astype(np.float32)
    n, m = s_gt.shape
    all_arr = []

    # visit every pixel; only pixels marked valid (255) contribute
    for i in range(n):
        for j in range(m):
            if s_all[i, j] == 255:
                if s_gt[i, j] == 0:
                    # avoid division by zero: count this pixel as zero error
                    sub_mre = 0
                else:
                    # relative error against the ground-truth disparity
                    sub_mre = np.abs(s_gt[i, j] - s_r[i, j]) / s_gt[i, j]
                all_arr.append(sub_mre)

    mre_all = np.mean(all_arr)
    return mre_all

Solution

  • A straightforward vectorisation of your method would be:

    def method_1(self):
        # get s_gt, s_all, s_r and cast to float32, as in the original method
        sub_mre = np.zeros(s_gt.shape, dtype=np.float32)
        idx = s_gt != 0                      # guard against division by zero
        sub_mre[idx] = np.abs((s_gt[idx] - s_r[idx]) / s_gt[idx])
        # average only over the pixels marked valid in s_all
        return np.mean(sub_mre[s_all == 255])
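
    The same guard against division by zero can also be written with np.where inside an np.errstate context: compute the ratio for every pixel, then patch the entries where the ground truth is zero. This is a sketch under the same assumptions as above (s_gt, s_all, s_r already fetched, with s_gt and s_r cast to float32):

    def method_1_where(self):
        # get s_gt, s_all, s_r as float32, as above
        # silence divide-by-zero and 0/0 warnings; those entries are patched below
        with np.errstate(divide='ignore', invalid='ignore'):
            sub_mre = np.abs(s_gt - s_r) / s_gt
        # zero ground-truth pixels contribute zero error, as in the original loop
        sub_mre = np.where(s_gt == 0, 0.0, sub_mre)
        return np.mean(sub_mre[s_all == 255])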
    

    But since you're averaging only over pixels where s_all is 255, you could also filter for those first and then do the rest:

    def method_2(self):
        # get s_gt, s_all, s_r as above
        idx = s_all == 255                   # keep only the valid pixels
        s_gt = s_gt[idx].astype(np.float32)
        s_r = s_r[idx].astype(np.float32)
        sub_mre = np.zeros_like(s_gt)
        idx = s_gt != 0                      # guard against division by zero
        sub_mre[idx] = np.abs((s_gt[idx] - s_r[idx]) / s_gt[idx])
        return np.mean(sub_mre)
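
    To sanity-check a rewrite like this, you can compare it against the original loop on small synthetic data. A minimal sketch, where the shapes and value ranges are made up purely for the test:

    import numpy as np

    rng = np.random.default_rng(0)
    s_gt = rng.integers(0, 256, size=(120, 190)).astype(np.float32)
    s_r = rng.integers(0, 256, size=(120, 190)).astype(np.float32)
    s_all = rng.choice([0, 255], size=(120, 190))

    # loop version, as in the question
    vals = []
    for i in range(s_gt.shape[0]):
        for j in range(s_gt.shape[1]):
            if s_all[i, j] == 255:
                vals.append(0.0 if s_gt[i, j] == 0
                            else abs(s_gt[i, j] - s_r[i, j]) / s_gt[i, j])
    loop_result = np.mean(vals)

    # vectorised version (method_1)
    sub_mre = np.zeros(s_gt.shape, dtype=np.float32)
    idx = s_gt != 0
    sub_mre[idx] = np.abs((s_gt[idx] - s_r[idx]) / s_gt[idx])
    vec_result = np.mean(sub_mre[s_all == 255])

    print(loop_result, vec_result)  # the two means should agree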
    

    Personally, I would favour the first method unless the second one turns out to be much faster. If the function is called only once, spending, for example, 40 ms instead of 5 ms is not noticeable, and the readability of the function matters more.
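
    If you do want numbers for that trade-off, a minimal timing sketch is below; the random data at the question's 1937 x 1217 size and the free functions standing in for the class methods are assumptions for the benchmark:

    import timeit

    import numpy as np

    rng = np.random.default_rng(0)
    shape = (1217, 1937)  # rows x columns for a 1937 x 1217 image
    s_gt = rng.integers(0, 256, size=shape).astype(np.float32)
    s_r = rng.integers(0, 256, size=shape).astype(np.float32)
    s_all = rng.choice([0, 255], size=shape)

    def method_1():
        sub_mre = np.zeros(s_gt.shape, dtype=np.float32)
        idx = s_gt != 0
        sub_mre[idx] = np.abs((s_gt[idx] - s_r[idx]) / s_gt[idx])
        return np.mean(sub_mre[s_all == 255])

    def method_2():
        idx = s_all == 255
        gt, r = s_gt[idx], s_r[idx]          # filter first, then divide
        sub_mre = np.zeros_like(gt)
        nz = gt != 0
        sub_mre[nz] = np.abs((gt[nz] - r[nz]) / gt[nz])
        return np.mean(sub_mre)

    print('method_1:', timeit.timeit(method_1, number=10))
    print('method_2:', timeit.timeit(method_2, number=10))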