Tags: image, matlab, transform, delaunay, matlab-cvst

Is coordinate mapping the same as pixel mapping in MATLAB for Delaunay triangulation?


I have to transform pixels from one image onto another image using feature detection. I have calculated the projective transformation matrix. One image is the base image, and the other is a linearly translated version of it.

Now I have to define a larger grid and assign pixels from the base image to it. For example, if the base image has the value 20 at (1,1), the larger grid also gets 20 at (1,1), and all unfilled grid positions get zeros. Then I have to map the linearly translated image onto the base image and write my own algorithm, based on Delaunay triangulation, to interpolate between the images. A sketch of the padding step is shown below.
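For example, that padding step could look like this (a sketch; the doubled grid size and the variable names are my assumptions):

base = imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
big  = zeros(2*size(base,1), 2*size(base,2), 'like', base);  % assumed: grid twice the image size, zero-filled
big(1:size(base,1), 1:size(base,2)) = base;                  % pixel (1,1) of the base stays at (1,1)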

My question is that when I map the translated image to the base image, I use the relation

[w z 1] = [x y 1] * inv(T)
A = B * inv(T)

where (w,z) are coordinates in the base image, (x,y) are coordinates in the translated image, A is the matrix whose rows are [w z 1], and B is the matrix whose rows are [x y 1]. (This is a matrix product in MATLAB's row-vector convention, not the element-wise .* product.)
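In code, the inverse mapping of a single point could look like this (a sketch; the sample point (50,75) is made up, and tform is the object returned by estimateGeometricTransform in the code below):

T = tform.T;              % 3x3 projective matrix, row-vector convention
B = [50, 75, 1];          % a made-up point (x,y) = (50,75) in homogeneous form
A = B / T;                % equivalent to B * inv(T), without forming the inverse
w = A(1) / A(3);          % de-homogenized x-coordinate in the base image
z = A(2) / A(3);          % de-homogenized y-coordinate in the base image

transformPointsInverse(tform, [50 75]) performs the same division internally.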

If I use the following code I get the new coordinates, but how do I relate them back to the image? Are the pixels from the second image also translated onto the first image? If not, how can I do this?

close all; clc; clear all;

image1_gray=imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
figure; imshow(image1_gray); axis on; grid on;
title('Base image');
impixelinfo
hold on

image2_gray = imread('C:\Users\Javeria Farooq\Desktop\project images\j.pgm');
figure(2); imshow(image2_gray); axis on; grid on;
title('Unregistered image');
impixelinfo

% Detect and extract features from both images
points_image1= detectSURFFeatures(image1_gray, 'NumScaleLevels', 100, 'NumOctaves', 5,  'MetricThreshold', 500 );
points_image2 = detectSURFFeatures(image2_gray, 'NumScaleLevels', 100, 'NumOctaves', 12,  'MetricThreshold', 500 );

[features_image1, validPoints_image1] = extractFeatures(image1_gray, points_image1);
[features_image2, validPoints_image2] = extractFeatures(image2_gray, points_image2);

% Match feature vectors
indexPairs = matchFeatures(features_image1, features_image2, 'Prenormalized', true);

% Get matching points
matched_pts1 = validPoints_image1(indexPairs(:, 1));
matched_pts2 = validPoints_image2(indexPairs(:, 2));

figure; showMatchedFeatures(image1_gray,image2_gray,matched_pts1,matched_pts2,'montage');
legend('matched points 1','matched points 2'); 
% (leftover from a three-image test: image3_gray, matched_pts3 and matched_pts4
% are never defined in this script, so these two lines would error)
% figure(5); showMatchedFeatures(image1_gray,image3_gray,matched_pts4,matched_pts3,'montage');
% legend('matched points 1','matched points 3');

% Compute the transformation matrix using RANSAC
[tform, inlierFramePoints, inlierPanoPoints, status] = estimateGeometricTransform(matched_pts1, matched_pts2, 'projective')
figure(6); showMatchedFeatures(image1_gray,image2_gray,inlierPanoPoints,inlierFramePoints,'montage');
[m, n] = size(image1_gray);
image1_gray = double(image1_gray);
[x1g, x2g] = meshgrid(1:m, 1:n);  % grid of every pixel coordinate (meshgrid(m,n) alone gives only a single point, not a 2x2 grid)
k = imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
ind = sub2ind(size(k), x1g, x2g); % linear indices: x1g holds row indices 1..m, x2g column indices 1..n

%[tform1, inlierFramepPoints, inlierPanopPoints, status] = estimateGeometricTransform(matched_pts4, matched_pts3, 'projective')
%figure(7); showMatchedFeatures(image1_gray,image3_gray,inlierPanopPoints,inlierFramepPoints,'montage');
%invtform=invert(tform)
%x=invtform
%[xq,yq]=meshgrid(1:0.5:200.5,1:0.5:200.5);

r=[];
A=[];

% I did not know how to refer to the tform object directly, so I copied the
% transformation matrix out of the tform structure (it is available as tform.T)
T = [0.99814272,   -0.0024304502, -1.2932052e-05;
     2.8876773e-05, 0.99930143,    1.6285858e-06;
     0.029063907,   67.809265,     1];

% transform a 200x200 grid of coordinates, one point at a time
% (with i,j = 1:400 the resulting grid would be 400x400 instead)
for i=1:200
    for j=1:200
        A = [A; i j 1];
        z = [i j 1] * T;                  % transform this one point (row vector post-multiplies T)
        r = [r; z(1)/z(3), z(2)/z(3)];    % de-homogenize
    end
end
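The same mapping can be computed without the double loop, which avoids the cost of the growing arrays (a sketch; ii, jj, A2 and r2 are my names, and the element-wise division needs R2016b or newer):

[jj, ii] = ndgrid(1:200, 1:200);         % jj varies fastest, matching the loop order above
A2 = [ii(:), jj(:), ones(numel(ii),1)];  % n-by-3 homogeneous coordinates
Z  = A2 * T;                             % transform every point at once
r2 = Z(:,1:2) ./ Z(:,3);                 % de-homogenize; equals r from the loop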

% I have transformed the coordinates, but how do I assign the pixel values?
% r(i,j) = c(i,j) ?
% stack the original x-coordinates on top of the transformed x-coordinates,
% and likewise for y (equivalent to the original d1/d2 and c1/c2 loops)
X = [A(:,1); r(:,1)];
Y = [A(:,2); r(:,2)];

% as far as I understand, this Delaunay triangulation is of the vertices only;
% it does not carry any pixel values from either image
DT=delaunayTriangulation(X,Y);
triplot(DT,X,Y);
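To attach pixel values to that triangulation myself, I think something like the following would work (a sketch with a placeholder value vector V; pointLocation and barycentric coordinates are standard delaunayTriangulation features, and duplicate points in P would have to be removed first):

P = [r(:,1), r(:,2)];                    % transformed coordinates, n-by-2
V = rand(size(P,1), 1);                  % placeholder: should be the pixel values of the warped image
DT2 = delaunayTriangulation(P);
[qx, qy] = meshgrid(1:200, 1:200);       % query points on the base-image grid
q = [qx(:), qy(:)];
[ti, bc] = pointLocation(DT2, q);        % enclosing triangle and barycentric coords per query
ok = ~isnan(ti);                         % queries outside the convex hull return NaN
tri = DT2.ConnectivityList(ti(ok), :);   % the 3 vertex indices of each located triangle
vals = nan(size(q,1), 1);
vals(ok) = sum(bc(ok,:) .* V(tri), 2);   % barycentric blend of the 3 vertex values
warped = reshape(vals, size(qx));        % interpolated 200-by-200 image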

Solution

  • I solved this problem using these two steps:

    1. Use the transformPointsForward command to transform the coordinates of the image, using the tform object returned by estimateGeometricTransform.

    2. Use the scatteredInterpolant class in MATLAB to assign the transformed coordinates their respective pixel values:

    F=scatteredInterpolant(P,z)

    where P is an n-by-2 matrix containing all the transformed coordinates, and z is an n-by-1 vector containing the pixel values of the image being transformed; it is obtained by converting the image to a column vector with image=image(:).

    Finally, all the transformed coordinates are present, together with their pixel values, on the base image and can be interpolated. (A combined sketch follows.)
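Putting both steps together, a minimal sketch (the variable names are mine; tform is the object from estimateGeometricTransform in the question):

% Step 1: forward-transform every pixel coordinate of the moving image
[h, w] = size(image2_gray);
[xg, yg] = meshgrid(1:w, 1:h);                  % x = column index, y = row index
ptsOut = transformPointsForward(tform, [xg(:), yg(:)]);

% Step 2: pair each transformed coordinate with its pixel value and
% resample on the base image's regular grid
z = double(image2_gray(:));                     % n-by-1 pixel values (column-major, matching xg(:))
F = scatteredInterpolant(ptsOut, z);            % default 'linear' method
[xb, yb] = meshgrid(1:size(image1_gray,2), 1:size(image1_gray,1));
registered = F(xb, yb);                         % warped image on the base grid

The default 'linear' method of scatteredInterpolant is itself built on a Delaunay triangulation of the scattered points, so this is essentially the triangulation-based interpolation the question describes.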