Tags: python, opencv, transformation-matrix

How to map a specific coordinate of a small image to a specific coordinate of a large image using a transformation matrix in Python


I'm learning OpenCV and I'm looking for Python code that takes input coordinates in a small image and maps them to coordinates in a large image, so that the small image is inserted into the large image and can also be transformed (e.g. rotated). I want to supply the mapping as a matrix of point pairs. For example, if the matrix is:

([75, 120] -> [210, 320],
 [30, 90]  -> [190, 305],
 [56, 102] -> [250, 474],
 [110, 98] -> [330, 520])

it means that the pixel at (75, 120) in the small image should map to the pixel at (210, 320) in the large image, the pixel at (30, 90) should map to (190, 305), and so on. I have searched a lot but haven't found a proper answer to my problem. How can I solve this?


Solution

  • Insert the small image into the large one:

    import sys
    import cv2
    
    dir = sys.path[0]
    small = cv2.imread(dir + '/small.png')
    big = cv2.imread(dir + '/big.png')
    
    # Paste the small image at (x, y) in the big image
    x, y = 20, 20
    h, w = small.shape[:2]
    big[y:y+h, x:x+w] = small
    
    cv2.imwrite(dir + '/out.png', big)
    
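If (x, y) can push the small image past the border of the big one, the slice assignment above raises a shape-mismatch error. Here is a minimal bounds-checked variant (the `paste` helper and the toy arrays are my own illustration, not part of the answer):

```python
import numpy as np

def paste(big, small, x, y):
    """Paste `small` onto `big` at (x, y), clipping anything outside the frame."""
    bH, bW = big.shape[:2]
    sH, sW = small.shape[:2]
    # Clip the destination rectangle to the big image's bounds
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + sW, bW), min(y + sH, bH)
    if x0 >= x1 or y0 >= y1:
        return big  # completely off-screen, nothing to paste
    big[y0:y1, x0:x1] = small[y0 - y:y1 - y, x0 - x:x1 - x]
    return big

big = np.zeros((10, 10, 3), dtype=np.uint8)
small = np.full((4, 4, 3), 255, dtype=np.uint8)
paste(big, small, 8, 8)  # only the small image's top-left 2x2 corner fits
```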


    Resize and then insert:

    h, w = small.shape[:2]
    # Halve the small image before pasting
    small = cv2.resize(small, (w // 2, h // 2))
    
    x, y = 20, 20
    h, w = small.shape[:2]
    big[y:y+h, x:x+w] = small
    
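`cv2.resize` is the right tool here; just to make the resampling step concrete without OpenCV installed, here is a rough nearest-neighbour sketch in plain NumPy (the `resize_nn` helper and target sizes are illustrative assumptions, not the answer's code):

```python
import numpy as np

def resize_nn(img, new_w, new_h):
    """Nearest-neighbour resize via NumPy fancy indexing (stand-in for cv2.resize)."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h  # source row for each output row
    cols = np.arange(new_w) * w // new_w  # source column for each output column
    return img[rows[:, None], cols]

small = np.arange(8 * 6 * 3, dtype=np.uint8).reshape(8, 6, 3)
# Shrink to a target width of 3 while preserving the 8:6 aspect ratio
target_w = 3
target_h = small.shape[0] * target_w // small.shape[1]
half = resize_nn(small, target_w, target_h)
```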


    Insert part of the image:

    x, y = 20, 20
    h, w = small.shape[:2]
    hh, ww = h//2, w//2
    big[y:y+hh, x:x+ww] = small[0:hh, 0:ww]
    


    Rotating sample:

    import numpy as np
    
    bH, bW = big.shape[:2]
    sH, sW = small.shape[:2]
    ch, cw = sH//2, sW//2
    x, y = sW - cw//2, ch
    
    # Place the small image on a black canvas the size of the big image
    empty = np.zeros((bH, bW, 3), dtype=np.uint8)
    empty[y:y+sH, x:x+sW] = small
    
    # Rotate the canvas around the small image's center
    M = cv2.getRotationMatrix2D(center=(x+cw, y+ch), angle=45, scale=1)
    rotated = cv2.warpAffine(empty, M, (bW, bH))
    # Copy the non-black pixels onto the big image
    big[np.where(rotated != 0)] = rotated[np.where(rotated != 0)]
    
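One caveat with `big[np.where(rotated != 0)] = ...`: it copies channel by channel, so a pixel with one zero channel keeps the old value in that channel, and pure-black pixels from the small image are dropped entirely. A per-pixel boolean mask avoids the color tearing; a small NumPy-only demonstration (toy arrays, not the images above):

```python
import numpy as np

big = np.full((4, 4, 3), 100, dtype=np.uint8)
rotated = np.zeros((4, 4, 3), dtype=np.uint8)
rotated[1, 1] = (0, 50, 200)   # a pixel whose first channel happens to be 0

# Per-channel copy (the answer's trick): the zero channel is left untouched
torn = big.copy()
torn[np.where(rotated != 0)] = rotated[np.where(rotated != 0)]

# Per-pixel mask: copy the whole pixel if ANY channel is non-zero
mask = (rotated != 0).any(axis=2)
clean = big.copy()
clean[mask] = rotated[mask]
```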


    Perspective transform sample:

    bH, bW = big.shape[:2]
    sH, sW = small.shape[:2]
    x, y = 0, 0
    
    # Place the small image on a black canvas the size of the big image
    empty = np.zeros((bH, bW, 3), dtype=np.uint8)
    empty[y:y+sH, x:x+sW] = small
    
    # Map the small image's corners to a trapezoid on the big image
    _inp = np.float32([[0, 0], [sW, 0], [sW, sH], [0, sH]])
    _out = np.float32([[bW//2-sW//2, 0], [bW//2+sW//2, 0], [bW, bH], [0, bH]])
    M = cv2.getPerspectiveTransform(_inp, _out)
    transformed = cv2.warpPerspective(empty, M, (bW, bH))  # dsize is (width, height)
    
    big[np.where(transformed != 0)] = transformed[np.where(transformed != 0)]
    


    And finally, for mapping coordinates, I think you just need to fill `_out`:

    bH, bW = big.shape[:2]
    sH, sW = small.shape[:2]
    
    empty = np.zeros((bH, bW, 3), dtype=np.uint8)
    empty[:sH, :sW] = small
    
    # Coordinates: TopLeft, TopRight, BottomRight, BottomLeft
    _inp = np.float32([[0, 0], [sW, 0], [sW, sH], [0, sH]])
    _out = np.float32([[50, 40], [300, 40], [200, 200], [10, 240]])
    M = cv2.getPerspectiveTransform(_inp, _out)
    transformed = cv2.warpPerspective(empty, M, (bW, bH))  # dsize is (width, height)
    
    big[np.where(transformed != 0)] = transformed[np.where(transformed != 0)]
    
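Note that the questioner's four pairs need not be the corners of the small image: `cv2.getPerspectiveTransform` accepts any four correspondences (no three collinear). To show that the mapping is just an 8-unknown linear system, here is a NumPy-only sketch that solves for the same homography from the question's exact pairs (the solver and `map_point` helper are my own illustration, not the answer's code):

```python
import numpy as np

# Point pairs from the question: small-image pixel -> large-image pixel
pairs = [((75, 120), (210, 320)),
         ((30, 90),  (190, 305)),
         ((56, 102), (250, 474)),
         ((110, 98), (330, 520))]

# Standard 8x8 linear system for a homography with h33 fixed to 1
# (the same system cv2.getPerspectiveTransform solves)
A, b = [], []
for (x, y), (u, v) in pairs:
    A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
    A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
M = np.append(h, 1.0).reshape(3, 3)

def map_point(M, x, y):
    """Apply the 3x3 homography to a single (x, y) point."""
    u, v, w = M @ (x, y, 1)
    return u / w, v / w
```

The resulting `M` can be passed straight to `cv2.warpPerspective(empty, M, (bW, bH))` to warp the whole small image according to those four correspondences.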
