Tags: image-processing, blob, pixel, chain, run-length-encoding

Pixel chains from run-length encoding


I've been banging my head against this one for a long time.

I am doing imaging. So far I've binarized my images, meaning that in each grayscale image, every pixel under a certain value is dropped to zero. This leaves me with only some regions of the original image, surrounded by a lot of "zero pixels".
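For illustration, here is a minimal sketch of that thresholding step in Python, assuming the grayscale image is a list of rows of integer pixel values; the threshold of 128 is just a placeholder, since no particular value is given:

    def binarize(gray, threshold=128):
        # Every pixel under the threshold is dropped to 0; the rest become 1.
        return [[1 if pixel >= threshold else 0 for pixel in row] for row in gray]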

Next I've run-length encoded my regions into "blobs". Runs are a simple form of data compression. For example, if you have binarized a square, you will have only a few runs describing the whole image. Each run is defined by an x,y coordinate and a length.

When recreating the image, for each run, go to its x,y coordinate and add pixels along the x axis for the length of the run.
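Something like the following sketch captures that round trip, assuming runs are (x, y, length) tuples as described; the function names encode_runs and decode_runs are my own, not from any particular library:

    def encode_runs(binary):
        # Collect horizontal runs of non-zero pixels as (x, y, length) tuples.
        runs = []
        for y, row in enumerate(binary):
            x = 0
            while x < len(row):
                if row[x]:
                    start = x
                    while x < len(row) and row[x]:
                        x += 1
                    runs.append((start, y, x - start))
                else:
                    x += 1
        return runs

    def decode_runs(runs, width, height):
        # Rebuild the binary image: for each run, go to (x, y) and set
        # `length` pixels along the x axis.
        image = [[0] * width for _ in range(height)]
        for x, y, length in runs:
            for dx in range(length):
                image[y][x + dx] = 1
        return image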

Now I have to take the runs and create a chain out of them that describes the contour of the region. I don't know how to do that.

I have a bunch of x,y,length runs and I have to "navigate" around the edges to form a chain. Normally in imaging this process is done on the original image, but I can't use the original image anymore here, so I have to compute the chain from the runs.

I know this looks like a big wall of text, but I don't know how to ask this question any better.

Any hints or pointers to an existing implementation would be awesome.

EDIT

Thanks to unwind, I'll link a few images:

[image: a binarized region B, its contour C, and its run-length representation D]
(source: tudelft.nl)

In this example, they process the binarized image B into the contour C (which I call a chain). However, I'd like to generate the contour from D, the run lengths.


Solution

  • Well, I lost that contract, but the answer was to use the Freeman chain coding technique.

    The fact that the input is run-length encoded has nothing to do with the algorithm, unlike what I previously thought. A rough sketch of the approach is below.
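As an illustration only (this is not the code from that contract): a minimal sketch in Python of producing a Freeman chain code by Moore-neighbour boundary tracing, assuming the binary image has first been rebuilt from the runs (for example with a decoder like the one sketched earlier). The function name, the raster-scan starting rule and the stopping condition are my own choices.

    def freeman_chain(binary):
        # Freeman directions in image coordinates (x right, y down):
        # 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
        dirs = [(1, 0), (1, -1), (0, -1), (-1, -1),
                (-1, 0), (-1, 1), (0, 1), (1, 1)]
        h, w = len(binary), len(binary[0])

        def fg(x, y):
            return 0 <= x < w and 0 <= y < h and binary[y][x] != 0

        # Start at the first foreground pixel in raster order; the pixel to
        # its left is guaranteed to be background (or outside the image).
        start = next(((x, y) for y in range(h) for x in range(w) if binary[y][x]), None)
        if start is None:
            return None, []

        chain = []
        current = start
        backtrack = 4  # direction from `current` to the background pixel we came from (west)
        for _ in range(4 * w * h):  # safety cap on the number of steps
            # Scan the 8 neighbours clockwise (on screen), starting just after
            # the backtrack position; the first foreground pixel found is the
            # next boundary pixel.
            found = None
            for i in range(1, 9):
                d = (backtrack - i) % 8  # decreasing Freeman index = clockwise on screen
                if fg(current[0] + dirs[d][0], current[1] + dirs[d][1]):
                    found = d
                    break
            if found is None:
                break  # isolated single pixel, no boundary to trace
            # The neighbour examined just before `found` is background; it
            # becomes the new backtrack, re-expressed relative to the new pixel.
            prev_bg = (current[0] + dirs[(found + 1) % 8][0],
                       current[1] + dirs[(found + 1) % 8][1])
            current = (current[0] + dirs[found][0], current[1] + dirs[found][1])
            backtrack = dirs.index((prev_bg[0] - current[0], prev_bg[1] - current[1]))
            chain.append(found)
            # Stop once we are back at the start pixel in the starting state.
            if current == start and backtrack == 4:
                break
        return start, chain

For a 4x4 filled square this yields twelve codes, three each of 0 (east), 6 (south), 4 (west) and 2 (north), walking the border clockwise on screen.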