Tags: python, opencv, image-processing, colors, color-space

Python HSI to RGB conversion - not what I expect


I am trying to implement the HSI -> RGB conversion manually from the formula, without using OpenCV.

I calculated the HSI -> RGB conversion, and it gives me almost the picture I expect, but there is some kind of noise.

Some images have a lot of noise, and some barely have any (there is still "some" noise).

I tried debugging, but I can't figure out what is wrong with my code.

Is it the formula I wrote? Or did I miss something?

I don't know what I don't know, so I need help.

Here's my code; the result images that show the noise are further below.


def HSI_to_bgr(h, s, i):
    h = degrees(h)
    if 0 < h <= 120 :
        b = i * (1 - s)
        r = i * (1 + (s * cos(radians(h)) / cos(radians(60) - radians(h))))
        g = i * 3 - (r + b)
    elif 120 < h <= 240:
        h -= 120
        r = i * (1 - s)
        g = i * (1 + (s * cos(radians(h)) / cos(radians(60) - radians(h))))
        b = 3 * i - (r + g)
    elif 0 < h <= 360:
        h -= 240
        g = i * (1 - s)
        b = i * (1 + (s * cos(radians(h)) / cos(radians(60) - radians(h))))
        r = i * 3 - (g + b)
    return [b, g, r]


def rgb_to_hue(b, g, r):
    angle = 0
    if b != g != r:
        angle = 0.5 * ((r - g) + (r - b)) / sqrt(((r - g) ** 2) + (r - b) * (g - b))
    if b <= g:
        return acos(angle)
    else:
        return 2 * pi - acos(angle)


def rgb_to_intensity(b, g, r):
    val = (b + g + r) / 3.
    if val == 0:
        return 0
    else:
        return val


def rgb_to_saturity(b, g, r):
    if r + g + b != 0:
        return 1. - 3. * np.min([r, g, b]) / (r + g + b)
    else:
        return 0




def point_process_colorscale_negative_intensity(file_path):
    src = cv2.imread(file_path, cv2.IMREAD_COLOR)

    height, width = src.shape[0], src.shape[1]
    new_image = np.zeros((height, width, 3), dtype=np.uint8)
    I = np.zeros((height, width))
    S = np.zeros((height, width))
    H = np.zeros((height, width))

    for i in range(height) :
        for j in range(width) :
            b = src[i][j][0] / 255.
            g = src[i][j][1] / 255.
            r = src[i][j][2] / 255.
            H[i][j] = rgb_hsi_conversion.rgb_to_hue(b, g, r)
            S[i][j] = rgb_hsi_conversion.rgb_to_saturity(b, g, r)
            I[i][j] = rgb_hsi_conversion.rgb_to_intensity(b, g, r)
            # I[i][j] = 1. - I[i][j]

            bgr_tuple = rgb_hsi_conversion.HSI_to_bgr(H[i][j], S[i][j], I[i][j])

            new_image[i][j][0] = round(bgr_tuple[0] * 255.)
            new_image[i][j][1] = round(bgr_tuple[1] * 255.)
            new_image[i][j][2] = round(bgr_tuple[2] * 255.)

    return new_image, src

Here are my result images:

(three result images, omitted here)

They all have some noise; the baboon's nose and the purple noise on the kid are the worst.

Thank you in advance, and if I haven't given enough information, I'll do my best to add more.


Solution

  • The main issue is the line b != g != r. In Python this chained comparison means (b != g) and (g != r), so the angle is only computed when all three values differ; whenever b == g (even if r is different), the condition is False, the angle stays 0, and acos(0) produces a wrong hue. The guard is only meant to catch the division-by-zero case, so it should be: if b == g and g == r, return 0.
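
    You can see the chained-comparison behaviour with a quick sketch (plain Python, using the same pixel values as the debug snippet below):

    # b != g != r is evaluated as (b != g) and (g != r).
    b, g, r = 74 / 255., 74 / 255., 229 / 255.
    print(b != g != r)              # False, because b == g
    print((b != g) and (g != r))    # False -- the equivalent expanded form
    # The angle is therefore never computed for this pixel, acos(0) is returned,
    # and the hue comes out as 90 degrees instead of 0.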

    For debugging the issue, you can find a pixel that gives a wrong output value (by comparing new_image and src).
    Find the b, g, r values of that pixel.
    Implement a small piece of code for debugging those specific b, g, r values.
    Use the debugger to find where things go wrong.

    Here is a code sample used for debugging the values b, g, r = 74, 74, 229:

    b = 74 / 255.
    g = 74 / 255.
    r = 229 / 255.
    H = rgb_to_hue(b, g, r)
    S = rgb_to_saturity(b, g, r)
    I = rgb_to_intensity(b, g, r)
    bgr_tuple = HSI_to_bgr(H, S, I)
    new_b = round(bgr_tuple[0] * 255.)
    new_g = round(bgr_tuple[1] * 255.)
    new_r = round(bgr_tuple[2] * 255.)
    

    That's how I figured out there is a problem with b != g != r (because b == g but g != r, so the condition is False and the angle is never computed).

    Remember that there are many cases where it's easier to write some code to find a bug than to debug the original code directly.
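
    One way to do that here is a short scan that lists the first few pixels the round trip did not preserve, so their b, g, r values can be fed into the debug snippet above (a minimal sketch, assuming cv2 and numpy as np are imported, and that new_image, src come from point_process_colorscale_negative_intensity):

    # Find pixels where the HSI round trip changed the value.
    diff = cv2.absdiff(src, new_image).sum(axis=2)   # total per-pixel channel difference
    ys, xs = np.nonzero(diff)
    for y, x in list(zip(ys, xs))[:5]:               # print the first few offenders
        b, g, r = src[y, x]
        print('pixel (%d, %d): b, g, r = %d, %d, %d' % (y, x, b, g, r))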


    Corrected code:

    import cv2
    import numpy as np
    from math import sqrt, cos, acos, degrees, radians, pi
    
    def HSI_to_bgr(h, s, i):
        h = degrees(h)
        if 0 <= h <= 120:  # h == 0 (e.g. gray or pure red pixels) now falls into the first sector
            b = i * (1 - s)
            r = i * (1 + (s * cos(radians(h)) / cos(radians(60) - radians(h))))
            g = i * 3 - (r + b)
        elif 120 < h <= 240:
            h -= 120
            r = i * (1 - s)
            g = i * (1 + (s * cos(radians(h)) / cos(radians(60) - radians(h))))
            b = 3 * i - (r + g)
        elif 0 < h <= 360:
            h -= 240
            g = i * (1 - s)
            b = i * (1 + (s * cos(radians(h)) / cos(radians(60) - radians(h))))
            r = i * 3 - (g + b)
        return [b, g, r]
    
    
    def rgb_to_hue(b, g, r):
        if (b == g == r):  # hue is undefined for gray pixels; return 0 and avoid division by zero
            return 0
    
        angle = 0.5 * ((r - g) + (r - b)) / sqrt(((r - g) ** 2) + (r - b) * (g - b))
        if b <= g:
            return acos(angle)
        else:
            return 2 * pi - acos(angle)
    
    
    def rgb_to_intensity(b, g, r):
        val = (b + g + r) / 3.
        if val == 0:
            return 0
        else:
            return val
    
    
    def rgb_to_saturity(b, g, r):
        if r + g + b != 0:
            return 1. - 3. * np.min([r, g, b]) / (r + g + b)
        else:
            return 0
    
    
    
    
    def point_process_colorscale_negative_intensity(file_path):
        src = cv2.imread(file_path, cv2.IMREAD_COLOR)
    
        height, width = src.shape[0], src.shape[1]
        new_image = np.zeros((height, width, 3), dtype=np.uint8)
        I = np.zeros((height, width))
        S = np.zeros((height, width))
        H = np.zeros((height, width))
    
        for i in range(height):
            for j in range(width):
                b = src[i][j][0] / 255.
                g = src[i][j][1] / 255.
                r = src[i][j][2] / 255.
                H[i][j] = rgb_to_hue(b, g, r)
                S[i][j] = rgb_to_saturity(b, g, r)
                I[i][j] = rgb_to_intensity(b, g, r)
    
                bgr_tuple = HSI_to_bgr(H[i][j], S[i][j], I[i][j])
    
                new_image[i][j][0] = np.clip(round(bgr_tuple[0] * 255.), 0, 255)  # clip to the valid byte range before storing as uint8
                new_image[i][j][1] = np.clip(round(bgr_tuple[1] * 255.), 0, 255)
                new_image[i][j][2] = np.clip(round(bgr_tuple[2] * 255.), 0, 255)
    
        return new_image, src
    
    
    new_image, src = point_process_colorscale_negative_intensity('mandrill.png')  # The mandrill image I used is from MATLAB.
    
    cv2.imwrite('new_image.png', new_image)  # Save new_image for testing
    
    cv2.imshow('new_image', new_image)  # Show new_image for testing
    cv2.imshow('abs diff*50', np.minimum(cv2.absdiff(src, new_image), 5)*50)  # Show the absolute difference of src and new_image, clipped to 5 and multiplied by 50, so small differences become visible.
    cv2.waitKey()
    cv2.destroyAllWindows()
    

    In case there are still issues, try debugging with the small piece of code...
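
    With the corrected guard, the pixel that originally failed should now round-trip cleanly. A quick check (assuming the corrected functions above are in scope):

    b, g, r = 74 / 255., 74 / 255., 229 / 255.
    h = rgb_to_hue(b, g, r)          # 0.0 for this pixel with the corrected guard
    s = rgb_to_saturity(b, g, r)
    i = rgb_to_intensity(b, g, r)
    new_b, new_g, new_r = HSI_to_bgr(h, s, i)
    print(round(new_b * 255), round(new_g * 255), round(new_r * 255))  # expect 74 74 229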


    new_image:
    (output image omitted)