I have a 2D numpy array (640×480) containing a depth value for each pixel, which I obtained from a rendering system. Now I want to obtain a point cloud from it. I have tried several methods, but I have a problem with the rotation:
Methods I tried:
```
import open3d as o3d

color_raw = o3d.io.read_image("../../test_data/RGBD/color/00000.jpg")
depth_raw = o3d.io.read_image("../../test_data/RGBD/depth/00000.png")
rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color_raw, depth_raw)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_image,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.Kinect2DepthCameraDefault))
o3d.visualization.draw_geometries([pcd])
```
**Problems**:
a) I do not have an RGBD image; all I have is a 2D numpy array containing a depth value for each pixel, which can be saved as a PNG image and used for the `depth_raw` parameter.
b) I tried to create a point cloud from the depth image alone using `open3d.geometry.PointCloud.create_from_depth_image(depth, intrinsic, extrinsic=(with default value), depth_scale=1000.0, depth_trunc=1000.0, stride=1, project_valid_depth_only=True)`, but I get an error when I try to pass the PNG image. How can I convert my numpy array into the required image format? (A rough sketch of what I am attempting follows this list.)
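Roughly, this is what I am attempting, as a minimal sketch (the intrinsics `fx`, `fy`, `cx`, `cy` are placeholders I would take from the rendering system, and I am assuming the depth values are in metres, hence `depth_scale=1.0`):
```
import numpy as np
import open3d as o3d

# Stand-in for my 640x480 depth map; Open3D images accept float32 (or uint16) arrays.
depth = np.random.rand(480, 640).astype(np.float32)

# Wrap the numpy array as an Open3D image.
depth_image = o3d.geometry.Image(depth)

# Placeholder pinhole intrinsics: width, height, fx, fy, cx, cy.
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)

pcd = o3d.geometry.PointCloud.create_from_depth_image(
    depth_image,
    intrinsic,
    depth_scale=1.0,    # assuming the array already stores metres
    depth_trunc=1000.0)

o3d.visualization.draw_geometries([pcd])
```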
Please help me with the above doubts so that I can proceed further.
Thanks in advance.
Try this: `depths` must be a 2D numpy array of shape (height, width); then:
```
import numpy as np
import open3d as o3d

depths = (your_depth_values)  # 2D numpy array of shape (height, width)

# Build one (x, y, z) vertex per pixel: the pixel indices become x/y and the
# depth value becomes z.
vertices = []
for x in range(depths.shape[0]):
    for y in range(depths.shape[1]):
        vertices.append((float(x), float(y), depths[x, y]))

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.asarray(vertices))
pcd.estimate_normals()
pcd = pcd.normalize_normals()
o3d.visualization.draw_geometries([pcd])
```
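If the depth values should be back-projected with the actual camera geometry instead of plotted against raw pixel indices, a hand-rolled pinhole back-projection along these lines may be closer to what you need. `depth_to_point_cloud` is just an illustrative helper, and the intrinsics `fx`, `fy`, `cx`, `cy` are placeholders you would take from your renderer:
```
import numpy as np
import open3d as o3d

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map to an (N, 3) array of camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel column/row indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels without depth

# Placeholder intrinsics for a 640x480 image.
depth = np.random.rand(480, 640).astype(np.float32)  # stand-in for your array
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.visualization.draw_geometries([pcd])
```
This maps pixel (u, v) with depth z to ((u - cx) * z / fx, (v - cy) * z / fy, z), which is essentially what `create_from_depth_image` does when given a pinhole intrinsic.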