python, opencv, machine-learning, deep-learning, computer-vision

Uniformity of color and texture in image


I am new to the field of deep learning and need to determine whether two images have uniform color and texture. For example, I have a

Master image -

MASTER IMAGE

Now, with respect to this image, I need to determine whether the following images have uniform texture and color distributions -

image 1 -

Picture Number 1

image 2 -

Picture Number 2

image 3 -

Picture number 3

I need to develop an algorithm which will evaluate these 3 images against the master image. The algorithm should approve image 1, and reject image 2 because of its color and image 3 because of both its color and its texture.

My approach to the problem was to analyze the images directly for texture. I found that the Local Binary Patterns (LBP) method seemed to be one of the better texture recognition methods (though I am not sure). I used its skimage implementation together with OpenCV in Python and found that the method worked.

from skimage import feature
import numpy as np
import cv2
import matplotlib.pyplot as plt

class LocalBinaryPatterns:
    def __init__(self, numPoints, radius):
        # store the number of points and radius
        self.numPoints = numPoints
        self.radius = radius

    def describe(self, image, eps=1e-7):
        # compute the Local Binary Pattern representation
        # of the image, and then use the LBP representation
        # to build the histogram of patterns
        lbp = feature.local_binary_pattern(image, self.numPoints,
            self.radius, method="uniform")
        (hist, _) = np.histogram(lbp.ravel(),
            bins=np.arange(0, self.numPoints + 3),
            range=(0, self.numPoints + 2))

        # normalize the histogram
        hist = hist.astype("float")
        hist /= (hist.sum() + eps)

        # return the histogram of Local Binary Patterns
        return hist


desc = LocalBinaryPatterns(24, 8)

# compute the LBP histogram of the master image and plot it
image = cv2.imread("main.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
hist = desc.describe(gray)

plt.plot(hist, 'b-')
plt.ylabel('Feature Vectors')
plt.show()

It extracted the features and produced a histogram of the LBP patterns. I plotted the histogram using matplotlib and could clearly see that the texture features of image 1 and image 2 were very similar to those of the master image, while the texture features of image 3 did not match.
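To turn that visual comparison into a number, the two LBP histograms could be compared with a distance measure such as the chi-squared distance. The sketch below reuses the LocalBinaryPatterns class defined above; the file names test2.jpg and test3.jpg and the 0.1 threshold are illustrative assumptions, not values I have tuned.

import cv2
import numpy as np

def chi2_distance(histA, histB, eps=1e-10):
    # chi-squared distance between two normalized histograms
    return 0.5 * np.sum(((histA - histB) ** 2) / (histA + histB + eps))

def lbp_histogram(path, descriptor):
    # grayscale LBP histogram of the image at the given path
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    return descriptor.describe(gray)

desc = LocalBinaryPatterns(24, 8)
master_hist = lbp_histogram("main.png", desc)

for path in ["test1.jpg", "test2.jpg", "test3.jpg"]:
    d = chi2_distance(master_hist, lbp_histogram(path, desc))
    verdict = "similar texture" if d < 0.1 else "different texture"
    print(path, d, verdict)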

Then I started analyzing the images for their color. I plotted the color histograms using OpenCV as follows -

import cv2
from matplotlib import pyplot as plt

def draw_image_histogram(image, channels, color='k'):
    # compute and plot a 256-bin histogram for the given channel(s)
    hist = cv2.calcHist([image], channels, None, [256], [0, 256])
    plt.plot(hist, color=color)
    plt.xlim([0, 256])

def show_color_histogram(image):
    # plot one histogram per BGR channel, colored accordingly
    for i, col in enumerate(['b', 'g', 'r']):
        draw_image_histogram(image, [i], color=col)
    plt.show()

show_color_histogram(cv2.imread("test1.jpg"))

I found that the color histogram of image 1 matched the master image, while the color histograms of images 2 and 3 did not. In this way I figured out that image 1 was a match and images 2 and 3 were not.
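The color match could also be scored numerically with cv2.compareHist instead of being judged by eye. Below is a rough sketch; the 8-bin 3D BGR histogram, the correlation metric, the 0.9 cut-off, and the file names test2.jpg/test3.jpg are illustrative assumptions.

import cv2

def color_histogram(path):
    # normalized 3D BGR histogram with 8 bins per channel
    image = cv2.imread(path)
    hist = cv2.calcHist([image], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

master_hist = color_histogram("main.png")
for path in ["test1.jpg", "test2.jpg", "test3.jpg"]:
    score = cv2.compareHist(master_hist, color_histogram(path), cv2.HISTCMP_CORREL)
    verdict = "similar color" if score > 0.9 else "different color"
    print(path, score, verdict)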

But this is a pretty simple approach, and I have no idea how many false positives it will produce. Moreover, I don't know whether this approach to the problem is the best one.

I would also like this to be done by a single, robust algorithm such as a CNN (but it should not be computationally too expensive). However, I have no experience with CNNs. Should I train a CNN on master images? Please point me in the right direction. I also came across LBCNNs; can they solve the problem? And what other, better approaches are there?

Thank you so much for the help


Solution

  • CNNs are good at capturing the underlying features and distribution of a data set. But they need large amounts of data (hundreds of thousands of examples) to learn and extract those features, which makes training very expensive. For high-resolution images they also need more parameters to extract those features, which in turn demands even more data.

    If you have a large data set, you can prefer a CNN, which can capture tiny bits of information such as these fine textures. Otherwise, the classical methods (the ones you have already used) also work well.

    There is also a method called transfer learning, where we take a pre-trained model (trained on a similar data set) and fine-tune it on our small data set. If you can find such a model, that can be another option (a rough sketch follows below).
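As a rough illustration of the transfer-learning option above (an assumption-laden sketch, not a tested recipe): take a pre-trained CNN, drop its classifier, use it as a fixed feature extractor, and compare the embeddings of the master and candidate images with cosine similarity. The choice of ResNet-18, the 224x224 input size, the file names, and the 0.8 threshold are all illustrative.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# pre-trained ResNet-18 with the final classifier removed,
# so the forward pass returns a 512-dimensional embedding
model = models.resnet18(pretrained=True)
model.fc = torch.nn.Identity()
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    # embedding of a single image, no gradients needed
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return model(x).squeeze(0)

master = embed("main.png")
for path in ["test1.jpg", "test2.jpg", "test3.jpg"]:
    sim = torch.nn.functional.cosine_similarity(master, embed(path), dim=0).item()
    verdict = "approve" if sim > 0.8 else "reject"
    print(path, sim, verdict)

With a handful of labeled master/candidate pairs the same backbone could instead be fine-tuned, but the fixed feature-extractor approach keeps the computational cost low, which matches the constraint in the question.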