Tags: c++, algorithm, opencv, image-processing, dithering

Floyd-Steinberg dithering image in grayscale based on input color


I want to create a Floyd-Steinberg dithered image using a specific color as a base. At the end of the dithering process, pixels that match the input color will be set to black and all other pixels to white (or vice versa), like the example images below.

Link 1

The example image above uses the following colors (RGB):

(255, 255, 255) white
(0, 215, 225) light_blue
(0, 120, 240) mid_blue
(0, 0, 120) dark_blue
(0, 0, 0) black
This is from https://www.lesswrong.com/posts/a2v5Syk6gJs7HnRoq/computational-thread-art

I have tried to replicate this dithering, but I didn't manage to get the same results. Here are examples of my results.

Color light_blue:

Link 2

Color black:

Link 3

Here is the code for what I have so far.

#include <opencv2/opencv.hpp>
#include <iostream>

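// Returns the palette entry closest to Color.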
cv::Vec3b FindClosestPaletteColor_(cv::Vec3b Color, const std::vector<cv::Vec3b>& Palette) {
    int MinID = 0;
    // Compute the difference in int: subtracting two cv::Vec3b values saturates
    // negative components to 0, which would make the distance wrong.
    cv::Vec3i Difference = cv::Vec3i(Color) - cv::Vec3i(Palette[0]);
    double MinDistance = cv::norm(Difference);
    for (size_t i = 1; i < Palette.size(); i++) {
        Difference = cv::Vec3i(Color) - cv::Vec3i(Palette[i]);
        double Distance = cv::norm(Difference);
        if (Distance < MinDistance) {
            MinDistance = Distance;
            MinID = i;
        }
    }
    return Palette[MinID];
}

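// Floyd-Steinberg dithering: quantize each pixel to the nearest palette color and
// diffuse the quantization error to the neighboring unprocessed pixels with
// weights 7/16, 3/16, 5/16 and 1/16.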
cv::Mat FloydSteinbergDithering_(cv::Mat InputImage, const std::vector<cv::Vec3b>& ColorPalette) {
    cv::Mat Image = InputImage.clone();
    cv::Mat ResultImage = Image.clone();
    for (int i = 0; i < Image.rows; i++) {
        for (int j = 0; j < Image.cols; j++) {
            cv::Vec3b CurrentColor = Image.at<cv::Vec3b>(i, j);
            cv::Vec3b NewPixelColor = FindClosestPaletteColor_(CurrentColor, ColorPalette);
            ResultImage.at<cv::Vec3b>(i, j) = NewPixelColor;

            for (int k = 0; k < 3; k++) {
                int QuantError = (int)CurrentColor[k] - NewPixelColor[k];
                if (j + 1 < Image.cols)
                    Image.at<cv::Vec3b>(i, j + 1)[k] = cv::saturate_cast<uchar>(Image.at<cv::Vec3b>(i, j + 1)[k] + (7 * QuantError) / 16);
                if (i + 1 < Image.rows && j - 1 >= 0)
                    Image.at<cv::Vec3b>(i + 1, j - 1)[k] = cv::saturate_cast<uchar>(Image.at<cv::Vec3b>(i + 1, j - 1)[k] + (3 * QuantError) / 16);
                if (i + 1 < Image.rows)
                    Image.at<cv::Vec3b>(i + 1, j)[k] = cv::saturate_cast<uchar>(Image.at<cv::Vec3b>(i + 1, j)[k] + (5 * QuantError) / 16);
                if (i + 1 < Image.rows && j + 1 < Image.cols)
                    Image.at<cv::Vec3b>(i + 1, j + 1)[k] = cv::saturate_cast<uchar>(Image.at<cv::Vec3b>(i + 1, j + 1)[k] + (1 * QuantError) / 16);
            }
        }
    }
    return ResultImage;
}

int main() 
{

    cv::Mat InputImage = cv::imread("TestImage.jpg");
    if (InputImage.empty()) {
        std::cerr << "Error: Unable to load input image!" << std::endl;
        return -1;
    }

    // I have used white as the secondary color, but in the final version I want it to take
    // only one color as input, so it can work with any color.
    /*Black*/      std::vector<cv::Vec3b> ColorPalette = { cv::Vec3b(0, 0, 0), cv::Vec3b(255, 255, 255) };
    ///*Light blue*/ std::vector<cv::Vec3b> ColorPalette = { cv::Vec3b(225, 215, 0), cv::Vec3b(255, 255, 255) };
    cv::Mat ResultImage = FloydSteinbergDithering_(InputImage, ColorPalette);

    cv::imwrite("output_image.jpg", ResultImage);

    cv::cvtColor(ResultImage, ResultImage, cv::COLOR_BGR2GRAY);
    cv::bitwise_not(ResultImage, ResultImage);

    cv::imwrite("GrayScaledImage.jpg", ResultImage);

    cv::imshow("Result Image", ResultImage);
    cv::waitKey(0);
    cv::destroyAllWindows();

    return 0;
}

Input image

Link 4


Solution

  • Here are my thoughts on dithering colors and clamping.

    When dithering, we create dots of different colors; from a distance these colors mix, and we perceive the patch as the average of the colors of the dots. One key takeaway here is that you cannot average two colors to get a third color unless that third color lies on the line between the two colors in the color space (assuming linear RGB, that line is straight; in other color spaces it doesn't need to be). When dithering with 3 colors, we have a triangle of colors on a 2D plane that can be obtained by mixing them. In general, the convex hull of the set of colors used for dithering (the palette) represents all of the colors that we can create by mixing the colors in the palette.
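
    As a small numeric illustration (my own snippet, not part of the original answer): any mix of two palette colors lies on the straight segment between them, so no ratio of red and green can ever produce a color off that segment.

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        // Components written in RGB order for readability; values treated as linear RGB,
        // as assumed above. A 50/50 mix of pure red and pure green is the midpoint of
        // the segment joining them.
        cv::Vec3f red(255.f, 0.f, 0.f), green(0.f, 255.f, 0.f);
        cv::Vec3f mix = 0.5f * red + 0.5f * green;   // (127.5, 127.5, 0)
        std::cout << mix[0] << ", " << mix[1] << ", " << mix[2] << std::endl;
        return 0;
    }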

    If the input image has colors outside of this convex hull, those colors cannot be reproduced. The dithering process will introduce an error that keeps being propagated, producing artifacts that can extend far across the image. Imagine a two-color dither with gray and black, applied to an image with a white area. The algorithm will fill this area with gray and accumulate the difference between the white and that gray, which will cause the area to the right of and below this object to be much brighter than it should be, until the difference has been accounted for.

    The same would happen with any other color outside the convex hull of the palette.

    If the palette does not cover all the colors in the image, that is, if the image has colors that cannot be formed by combinations of the palette colors, we need to get rid of those colors before dithering.

    Clamping the accumulated error is a workaround. It keeps the accumulated error from being propagated too far, so non-representable colors cannot produce huge artifacts. But it also alters colors that can be represented, and so it will not produce perfect colors either.

    The best solution of course is to ensure that the palette we use for dithering always covers the whole gamut used in the image. Instead of picking colors that occur often in the image, we pick colors that form the extremes of the colors in the image. Including saturated red, yellow, green, cyan, blue and magenta, as well as pure white and black, will accomplish this for any RGB image.
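
    For an 8-bit BGR image in OpenCV, such a gamut-covering palette could simply be the eight corners of the RGB cube (an illustrative sketch, not code from the answer):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // The eight corners of the RGB cube: black, white, the saturated primaries and the
    // saturated secondaries. Every color of an 8-bit RGB image lies inside their convex
    // hull. Note that OpenCV stores channels in BGR order.
    const std::vector<cv::Vec3b> FullGamutPalette = {
        cv::Vec3b(0, 0, 0),       // black
        cv::Vec3b(255, 255, 255), // white
        cv::Vec3b(0, 0, 255),     // red
        cv::Vec3b(0, 255, 255),   // yellow
        cv::Vec3b(0, 255, 0),     // green
        cv::Vec3b(255, 255, 0),   // cyan
        cv::Vec3b(255, 0, 0),     // blue
        cv::Vec3b(255, 0, 255),   // magenta
    };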

    If the choice of colors is fixed, as is the case here, then the best solution is to pre-process the input image to remove colors outside of the convex hull of the palette, and replace them by colors inside. This is not a complex process, but needs a bit of thought to be done right.

    Clamping is a very simple and cheap solution, but it is sub-optimal.
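
    To make this concrete, here is a minimal sketch of my own (not the code used to produce the images below): Floyd-Steinberg dithering on a floating-point working copy, with an optional ClampError flag that applies the clipping workaround. Working in float lets accumulated values leave [0, 255], so no error is silently discarded the way cv::saturate_cast on a uchar image discards it.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <limits>
    #include <vector>

    cv::Mat FloydSteinbergFloat(const cv::Mat& InputImage,
                                const std::vector<cv::Vec3b>& Palette,
                                bool ClampError) {
        cv::Mat Work;
        InputImage.convertTo(Work, CV_32FC3);
        cv::Mat Result(InputImage.size(), CV_8UC3);

        for (int i = 0; i < Work.rows; i++) {
            for (int j = 0; j < Work.cols; j++) {
                cv::Vec3f Old = Work.at<cv::Vec3f>(i, j);
                if (ClampError)  // optional: clip the accumulated value to [0, 255]
                    for (int k = 0; k < 3; k++)
                        Old[k] = std::min(255.f, std::max(0.f, Old[k]));

                // Nearest palette entry by Euclidean distance.
                int Best = 0;
                float BestDist = std::numeric_limits<float>::max();
                for (size_t p = 0; p < Palette.size(); p++) {
                    cv::Vec3f C(Palette[p][0], Palette[p][1], Palette[p][2]);
                    float Dist = (float)cv::norm(Old - C);
                    if (Dist < BestDist) { BestDist = Dist; Best = (int)p; }
                }
                Result.at<cv::Vec3b>(i, j) = Palette[Best];

                // Diffuse the (possibly clamped) quantization error with the usual
                // Floyd-Steinberg weights 7/16, 3/16, 5/16 and 1/16.
                cv::Vec3f Err = Old - cv::Vec3f(Palette[Best][0], Palette[Best][1], Palette[Best][2]);
                if (j + 1 < Work.cols)                      Work.at<cv::Vec3f>(i, j + 1)     += Err * (7.f / 16.f);
                if (i + 1 < Work.rows && j - 1 >= 0)        Work.at<cv::Vec3f>(i + 1, j - 1) += Err * (3.f / 16.f);
                if (i + 1 < Work.rows)                      Work.at<cv::Vec3f>(i + 1, j)     += Err * (5.f / 16.f);
                if (i + 1 < Work.rows && j + 1 < Work.cols) Work.at<cv::Vec3f>(i + 1, j + 1) += Err * (1.f / 16.f);
            }
        }
        return Result;
    }

    With ClampError set to false and a palette that covers the image gamut, this behaves like standard Floyd-Steinberg dithering.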


    A quick example. I'm using an image with tweaked colors to make them brighter, so that the effect is more noticeable.

    input image

    Let's first dither using only black, white, red, green and blue. Note that you cannot make bright yellow using these colors: if you average red and green you get something more akin to orange or brown than yellow. The dithering process adds white to try to make things brighter, but it still accumulates a large error that gets propagated down and to the right, changing the dark lettering and dark lines:

    dithering without yellow

    If we add pure yellow to the palette, errors do not accumulate, because the yellow in the image can now be formed by mixing colors from our palette:

    dithering with yellow

    When we dither without using yellow, but clip the error to be within the range [0, 255] for each channel, the error doesn't accumulate as much and so doesn't get propagated as far. Yet it is still there, and we can still observe how the diagonal lines at the top-left and bottom-right of the sign, and the letters, appear thinner than they should be and have a green halo. There's also a bright line along the bottom-right side of the sign:

    dithering without yellow, clipping the error

    If we had clipped the input pixel values to be within the convex hull of the palette, there wouldn't be any error accumulating at all.
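
    For the two-color case in the question, the convex hull of the palette is just the straight segment between the two colors, so one way to do that pre-processing (an illustrative sketch of my own, with a hypothetical helper name, not necessarily the best projection to use) is to project every pixel orthogonally onto that segment before dithering:

    #include <opencv2/opencv.hpp>
    #include <algorithm>

    // Replace every pixel by its orthogonal projection onto the segment between the two
    // palette colors, clamped to the segment's endpoints. Afterwards every pixel can be
    // reproduced by a two-color dither, so no error accumulates indefinitely.
    cv::Mat ProjectOntoPaletteSegment(const cv::Mat& InputImage,
                                      cv::Vec3b ColorA, cv::Vec3b ColorB) {
        cv::Mat Result = InputImage.clone();
        cv::Vec3f A(ColorA[0], ColorA[1], ColorA[2]);
        cv::Vec3f B(ColorB[0], ColorB[1], ColorB[2]);
        cv::Vec3f Dir = B - A;
        float LenSq = Dir.dot(Dir);
        for (int i = 0; i < Result.rows; i++) {
            for (int j = 0; j < Result.cols; j++) {
                cv::Vec3b& Pixel = Result.at<cv::Vec3b>(i, j);
                cv::Vec3f P(Pixel[0], Pixel[1], Pixel[2]);
                // Parameter of the projection onto the line A + t * (B - A),
                // clamped to [0, 1] so the result stays on the segment.
                float t = LenSq > 0.f ? (P - A).dot(Dir) / LenSq : 0.f;
                t = std::min(1.f, std::max(0.f, t));
                cv::Vec3f Q = A + t * Dir;
                Pixel = cv::Vec3b(cv::saturate_cast<uchar>(Q[0]),
                                  cv::saturate_cast<uchar>(Q[1]),
                                  cv::saturate_cast<uchar>(Q[2]));
            }
        }
        return Result;
    }

    Dithering the projected image with the same two colors (for example the light blue and white from the question) then has no out-of-gamut error left to diffuse.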