image-processing · computer-vision · point-clouds

Converting 320x240x3 point cloud matrix to 320x240x1 depth map


Can anybody help me with the following problem, using Python?

I have a point cloud matrix obtained from a virtual camera. Its dimensions are 320x240x3, storing the x, y, z coordinates of each point (points in the camera's view).

All values range from negative to positive. How can I convert this point cloud matrix to a 320x240x1 depth map that stores the positive depth value of each pixel? Thanks in advance.
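For reference, a synthetic stand-in with the same shape as my data (random values, purely to illustrate the layout):

```python
import numpy as np

# synthetic point cloud matrix: 320x240 points, each storing
# (x, y, z) camera coordinates ranging from negative to positive
pmap = np.random.uniform(-1.0, 1.0, size=(320, 240, 3))
print(pmap.shape)  # (320, 240, 3)
```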


Solution

  • Well, if you know how to convert a depth image to a point map (that is, your point cloud matrix), you'll know how to go the other way round.

    Given the associated camera intrinsics, and using the pinhole camera model, we can recover the point map with the following Python code:

    import numpy as np
    from imageio import imread, imwrite
    
    
    # read the depth image in
    img = np.asarray(imread('depth.png'))
    
    # camera intrinsics, for demonstration purposes only. change to your own.
    fx, fy = 570.0, 570.0
    cx, cy = 320.0, 240.0
    
    # suppose your depth image is scaled by a factor of 1000
    z = img.astype(float) / 1000.0
    
    # convert the depth image to a point map via the pinhole model;
    # the pixel grid is derived from the image shape, here (480, 640)
    h, w = z.shape
    px, py = np.meshgrid(np.arange(w, dtype=float),
                         np.arange(h, dtype=float))  # pixel_x, pixel_y
    x = (px - cx) / fx * z
    y = (py - cy) / fy * z
    pmap = np.stack((x, y, z), axis=-1)  # shape (h, w, 3)
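
    As a quick sanity check (synthetic depth and made-up intrinsics, not from the question), the z channel of the recovered point map should be exactly the depth we started from, and the ray through the principal point should have x = y = 0:

    ```python
    import numpy as np

    fx, fy, cx, cy = 570.0, 570.0, 320.0, 240.0

    # synthetic depth image in metres, shape (480, 640)
    z = np.full((480, 640), 2.5)

    px, py = np.meshgrid(np.arange(640, dtype=float),
                         np.arange(480, dtype=float))
    x = (px - cx) / fx * z
    y = (py - cy) / fy * z
    pmap = np.stack((x, y, z), axis=-1)

    # round trip: the third channel is the original depth,
    # and the point at the principal point has x = y = 0
    assert np.allclose(pmap[..., 2], z)
    assert pmap[240, 320, 0] == 0.0 and pmap[240, 320, 1] == 0.0
    ```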
    

    Now back to your question.

    Assuming your point map is in the camera coordinate system (that is, you haven't translated or rotated pmap), to convert the point map to a depth image we simply do:

    # convert point map to depth image
    # that is, we are only using the points' z coordinates values
    depth = (pmap[:, :, 2] * 1000).astype(np.uint16)
    # now we can save the depth image to disk
    imwrite('output.png', depth)
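
    One caveat worth adding (my note, not part of the original answer): if any points lie behind the camera, their z is negative, and casting a negative float to np.uint16 wraps around to a huge value. Clipping to zero first keeps the depth image valid:

    ```python
    import numpy as np

    # tiny hypothetical point map: one row, two points,
    # the first of which sits behind the camera (z < 0)
    pmap = np.array([[[0.0, 0.0, -0.5],
                      [0.0, 0.0,  1.25]]])

    z = np.clip(pmap[:, :, 2], 0.0, None)  # drop behind-camera depths
    depth = (z * 1000).astype(np.uint16)
    print(depth.tolist())  # [[0, 1250]]
    ```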
    

    Hope it helps :)