
Viewfinder Alignment


Has anyone worked with the Viewfinder Alignment method? The first step (edge detection) is more or less understandable. The paper says that "to extract edges we take the squared gradient of the image in four equally spaced directions: horizontal, vertical, and the two diagonal directions" (1), and that "we then perform an integral projection of each gradient image in the direction perpendicular to the direction of the gradient" (2). For the horizontal direction I implemented that algorithm this way:

function pl = horgrad(a)
[h,w] = size(a);
a = double(a);     % widen first: uint8 differences saturate at 0, breaking abs()
b = zeros(h,w);
for i = 1 : h
        for j = 2 : w
                % abs() instead of squaring
                b(i,j) = abs(a(i,j) - a(i,j-1));     % (1)
        end
end
pl = sum(b);     % (2)
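
For the vertical direction I assume it's the same pattern transposed (my guess, not taken from the paper: gradient down the columns, projection along the rows; the two diagonal directions would need the projection to run along the image diagonals):

function pt = vertgrad(a)
[h,w] = size(a);
a = double(a);     % widen first, as above
b = zeros(h,w);
for i = 2 : h
        for j = 1 : w
                % abs() instead of squaring, as above
                b(i,j) = abs(a(i,j) - a(i-1,j));     % (1)
        end
end
pt = sum(b,2)';     % (2) row sums, i.e. projection along rows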

The real problem for me is the second step, edge alignment. What do px[i]_1, py[i]_1, pu[i]_1 and pv[i]_1 mean? Why are they equal to 1? How does the counter i change?


Solution

  • As I understand the algorithm, px, py, pu and pv are the integral projections in each of the four directions, so px is pl in your code. px[i]_0 is a single bin of that vector (pl(i) in your code), and px[i]_1 is the number of points used to generate that bin (a normalization coefficient?). For the horizontal direction every px[i]_1 is simply the image height h; for the diagonal directions the counts differ from bin to bin, which is presumably why they are stored. The other directions are analogous. A sketch of how the pair could be accumulated is below.
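
    A minimal sketch of how I read that bookkeeping for the horizontal direction (the function name horproj and the layout of p are mine, not from the paper):

    function p = horproj(a)
    [h,w] = size(a);
    a = double(a);              % avoid unsigned saturation in the difference
    p = zeros(w,2);             % p(j,1) ~ px[j]_0, p(j,2) ~ px[j]_1
    for i = 1 : h
        for j = 2 : w
            g = abs(a(i,j) - a(i,j-1));   % gradient magnitude at (i,j)
            p(j,1) = p(j,1) + g;          % px[j]_0: projected gradient sum
            p(j,2) = p(j,2) + 1;          % px[j]_1: the "+1" is the point count
        end
    end

    On this reading, the 1 in the paper is each pixel incrementing its bin's count, and i simply runs over the projection bins.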

    Repeating my comment on your question: for better performance you should avoid loops, especially nested loops, especially when vectorizing is as easy as it is in your case:

    b(:,2:end)=abs(diff(a,1,2));
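
    diff(a,1,2) takes first differences along the second dimension, i.e. a(i,j) - a(i,j-1), so the result is h-by-(w-1) and lines up with b(:,2:end). With the double cast from above, the whole function collapses to something like:

    function pl = horgrad(a)
    a = double(a);                     % widen first, as in the loop version
    b = zeros(size(a));
    b(:,2:end) = abs(diff(a,1,2));     % (1)
    pl = sum(b);                       % (2)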