Tags: python, opencv, image-processing, perspective, camera

How can I avoid parts of the image getting cut off when performing perspective warping?


I am trying to transform the perspective of an image so that the result gives a front-view perspective. I am using the cv2.warpPerspective function. However, when performing the warp, some parts of the image get cut off. How can I avoid this? One option I considered is to find the transformation matrix for a specific part of the image and then apply that matrix to the whole image. However, that method is not yielding desirable results.

The code I am using is:

    import numpy as np
    import cv2
    from google.colab.patches import cv2_imshow

    # read the input image
    img = cv2.imread("drive/My Drive/Images_for_Adarsh/DSC_0690.JPG")

    # resize to a fixed working size
    height,width = 1000,1500
    img = cv2.resize(img,(width,height))

    # source quadrilateral in the input and its destination rectangle
    pts1 = np.float32([[ 250, 0],[1220, 300],[1300, 770],[ 250, 860]])
    pts2 = np.float32([[0,0],[width,0],[width,height],[0,height]])
    matrix = cv2.getPerspectiveTransform(pts1,pts2)

    print(matrix.shape)
    print(matrix)

    # warp to the front view, display, and save the result
    imgOutput = cv2.warpPerspective(img,matrix,(width,height))
    cv2_imshow(imgOutput)
    cv2.imwrite("drive/My Drive/PerspectiveWarp-Results1/0690_coarse/0690([[ 250, 0],[1220, 300],[1300, 770],[ 250, 860]]).JPG",imgOutput)

The input image:

The warped image:


Solution

  • Here is one simple way to warp the image in Python/OpenCV and add extra space so that the output contains more of the input, with the areas outside the input left transparent.

    Input:


    import numpy as np
    import cv2
    
    # read input
    img = cv2.imread("building.jpg")
    
    # resize
    height,width = 1000,1500
    img = cv2.resize(img, (width,height))
    
    # specify conjugate coordinates and shift output on left and top by 500
    pts1 = np.float32([[ 250, 0],[1220, 300],[1300, 770],[ 250, 860]])
    pts2 = np.float32([[+500,+500],[width+500,+500],[width+500,height+500],[+500,height+500]])
    
    # compute perspective matrix
    matrix = cv2.getPerspectiveTransform(pts1,pts2)
    
    print(matrix.shape)
    print(matrix)
    
    # convert image to BGRA with opaque alpha
    img = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
    
    # do perspective transformation setting area outside input to transparent
    # extend output size so extended by 500 all around
    imgOutput = cv2.warpPerspective(img, matrix, (width+1000,height+1000), flags=cv2.INTER_LINEAR,
                                    borderMode=cv2.BORDER_CONSTANT, borderValue=(0,0,0,0))
    
    # resize output, since it is too large to post
    imgOutput = cv2.resize(imgOutput, (width,height))
        
    # save the warped output
    cv2.imwrite("building_warped.png", imgOutput)
    
    # show the result
    cv2.imshow("result", imgOutput)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    

    Result:

    Note: it should be possible to use the matrix to project the input corners to the output domain and compute the required output size to hold all of the warped input.
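    Following up on that note, here is a minimal sketch of the idea (not part of the original answer; the file names are placeholder assumptions): project the four input corners with cv2.perspectiveTransform, take their bounding box, and prepend a translation so the full warp fits exactly in the output.

    import numpy as np
    import cv2

    # read and resize the input as above ("building.jpg" is a placeholder path)
    img = cv2.imread("building.jpg")
    height, width = 1000, 1500
    img = cv2.resize(img, (width, height))

    # same conjugate coordinates as in the question
    pts1 = np.float32([[ 250, 0],[1220, 300],[1300, 770],[ 250, 860]])
    pts2 = np.float32([[0,0],[width,0],[width,height],[0,height]])
    matrix = cv2.getPerspectiveTransform(pts1, pts2)

    # project the four input corners through the homography
    corners = np.float32([[0,0],[width,0],[width,height],[0,height]]).reshape(-1,1,2)
    warped = cv2.perspectiveTransform(corners, matrix).reshape(-1,2)

    # bounding box of the warped corners
    x_min, y_min = np.floor(warped.min(axis=0)).astype(int)
    x_max, y_max = np.ceil(warped.max(axis=0)).astype(int)

    # prepend a translation so the bounding box starts at (0,0),
    # then warp into an output just large enough to hold everything
    shift = np.array([[1, 0, -x_min],
                      [0, 1, -y_min],
                      [0, 0, 1]], dtype=np.float64)
    out_size = (int(x_max - x_min), int(y_max - y_min))
    imgOutput = cv2.warpPerspective(img, shift @ matrix, out_size,
                                    flags=cv2.INTER_LINEAR,
                                    borderMode=cv2.BORDER_CONSTANT, borderValue=(0,0,0))

    cv2.imwrite("building_warped_full.png", imgOutput)

    This avoids guessing the 500-pixel padding: the output size and offset are computed directly from where the corners land, so none of the warped input is clipped.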